Trivedi, Prinal; Edwards, Jode W; Wang, Jelai; Gadbury, Gary L; Srinivasasainagendra, Vinodh; Zakharkin, Stanislav O; Kim, Kyoungmi; Mehta, Tapan; Brand, Jacob P L; Patki, Amit; Page, Grier P; Allison, David B
2005-04-06
Many efforts in microarray data analysis are focused on providing tools and methods for the qualitative analysis of microarray data. HDBStat! (High-Dimensional Biology-Statistics) is a software package designed for the analysis of high-dimensional biology data such as microarray data. It was initially developed for the analysis of microarray gene expression data, but it can also be used for some applications in proteomics and other aspects of genomics. HDBStat! provides statisticians and biologists with a flexible and easy-to-use interface for analyzing complex microarray data using a variety of methods for data preprocessing, quality-control analysis and hypothesis testing. Results from data preprocessing, quality-control analysis and hypothesis testing are output as Excel CSV tables, graphs and an HTML report summarizing the analysis. HDBStat! is platform-independent software that is freely available to academic institutions and non-profit organizations. It can be downloaded from our website http://www.soph.uab.edu/ssg_content.asp?id=1164.
PyChimera: use UCSF Chimera modules in any Python 2.7 project.
Rodríguez-Guerra Pedregal, Jaime; Maréchal, Jean-Didier
2018-05-15
UCSF Chimera is a powerful visualization tool with a remarkable presence in the computational chemistry and structural biology communities. Since it is built on a C++ core wrapped in a Python 2.7 environment, one might expect to easily import UCSF Chimera's arsenal of resources into custom scripts or software projects. Nonetheless, this is not readily possible if the script is not executed within UCSF Chimera itself, owing to the isolation of the platform. UCSF ChimeraX, the successor to the original Chimera, partially solves the problem, but major upgrades are still needed before this updated version can offer all UCSF Chimera features. PyChimera has been developed to overcome these limitations and provide access to the UCSF Chimera codebase from any Python 2.7 interpreter, including interactive programming with tools like IPython and Jupyter Notebooks, making it easier to use with additional third-party software. PyChimera is LGPL-licensed and available at https://github.com/insilichem/pychimera. Contact: jaime.rodriguezguerra@uab.cat or jeandidier.marechal@uab.cat. Supplementary data are available at Bioinformatics online.
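A minimal usage sketch, based on the entry points documented in the project README at the time (patch_environ() and enable_chimera()); exact names may vary between versions, and the PDB ID below is a placeholder:

    # Run from a plain Python 2.7 interpreter, IPython, or a Jupyter kernel.
    import pychimera
    pychimera.patch_environ()   # set the CHIMERA environment variables
    pychimera.enable_chimera()  # initialize the UCSF Chimera runtime

    # UCSF Chimera modules are now importable like any other package.
    import chimera
    from chimera import runCommand
    runCommand('open 1abc')  # '1abc' is a placeholder PDB ID

Alternatively, the pychimera command-line wrapper can launch scripts or an interactive session with the environment already configured.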
41 CFR 302-7.305 - When must my agency ship my UAB?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 41 Public Contracts and Property Management 4 2012-07-01 2012-07-01 false When must my agency ship my UAB? 302-7.305 Section 302-7.305 Public Contracts and Property Management Federal Travel....305 When must my agency ship my UAB? Your agency must ship your UAB in time to ensure that your...
41 CFR 302-7.305 - When must my agency ship my UAB?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 4 2013-07-01 2012-07-01 true When must my agency ship my UAB? 302-7.305 Section 302-7.305 Public Contracts and Property Management Federal Travel....305 When must my agency ship my UAB? Your agency must ship your UAB in time to ensure that your...
41 CFR 302-7.305 - When must my agency ship my UAB?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 41 Public Contracts and Property Management 4 2014-07-01 2014-07-01 false When must my agency ship my UAB? 302-7.305 Section 302-7.305 Public Contracts and Property Management Federal Travel... Baggage Allowance § 302-7.305 When must my agency ship my UAB? Your agency must ship your UAB in time to...
The Other Bladder Syndrome: Underactive Bladder
Miyazato, Minoru; Yoshimura, Naoki; Chancellor, Michael B
2013-01-01
Detrusor underactivity, or underactive bladder (UAB), is defined as a contraction of reduced strength and/or duration resulting in prolonged bladder emptying and/or a failure to achieve complete bladder emptying within a normal time span. UAB can be observed in many neurologic conditions and myogenic failure. Diabetic cystopathy is the most important and inevitable disease developing from UAB, and can occur silently and early in the disease course. Careful neurologic and urodynamic examinations are necessary for the diagnosis of UAB. Proper management is focused on prevention of upper tract damage, avoidance of overdistension, and reduction of residual urine. Scheduled voiding, double voiding, α-blockers, and intermittent self-catheterization are the typical conservative treatment options. Sacral nerve stimulation may be an effective treatment option for UAB. New concepts such as stem cell therapy and neurotrophic gene therapy are being explored. Other new agents for UAB that act on prostaglandin E2 and EP2 receptors are currently under development. The pharmaceutical and biotechnology industries that have a pipeline in urology and women's health may want to consider UAB as a potential target condition. Scientific counsel and review of the current pharmaceutical portfolio may uncover agents, including those in other therapeutic fields, that may benefit the management of UAB. PMID:23671401
UAB UT Annual Report : 2010-2011
DOT National Transportation Integrated Search
2011-01-01
The UAB UTC's theme, Traffic Safety and Injury Control, was an excellent fit for the UAB Injury Control Research Center's (ICRC) faculty, because it complemented the ICRC's Mission, which was: To help the nation achieve a significant reduct...
Genomics of Three New Bacteriophages Useful in the Biocontrol of Salmonella
Bardina, Carlota; Colom, Joan; Spricigo, Denis A.; Otero, Jennifer; Sánchez-Osuna, Miquel; Cortés, Pilar; Llagostera, Montserrat
2016-01-01
Non-typhoid Salmonella is the principal pathogen related to food-borne diseases throughout the world. Widespread antibiotic resistance has adversely affected human health and has encouraged the search for alternative antimicrobial agents. The advances in bacteriophage therapy highlight their use in controlling a broad spectrum of food-borne pathogens. One requirement for the use of bacteriophages as antibacterials is the characterization of their genomes. In this work, complete genome sequencing and molecular analyses were carried out for three new virulent Salmonella-specific bacteriophages (UAB_Phi20, UAB_Phi78, and UAB_Phi87) able to infect a broad range of Salmonella strains. Sequence analysis of the genomes of the UAB_Phi20, UAB_Phi78, and UAB_Phi87 bacteriophages did not reveal the presence of known virulence-associated genes, antibiotic resistance genes, or potential immunoreactive food allergens. The UAB_Phi20 genome comprised 41,809 base pairs with 80 open reading frames (ORFs), 24 of them with an assigned function. The genome sequence showed a high homology of UAB_Phi20 with Salmonella bacteriophage P22 and other members of the P22likevirus genus of the family Podoviridae, including ST64T and ST104. The DNA of UAB_Phi78 contained 44,110 bp, including direct terminal repeats (DTRs) of 179 bp; 58 putative ORFs were predicted, 20 of which were assigned a function. This bacteriophage was assigned to the SP6likevirus genus of the family Podoviridae based on its high similarity not only with SP6 but also with the K1-5, K1E, and K1F bacteriophages, all of which infect Escherichia coli. The UAB_Phi87 genome sequence consisted of 87,669 bp with direct terminal repeats of 608 bp; although 148 ORFs were identified, putative functions could be assigned to only 29 of them. Sequence comparisons revealed the mosaic structure of UAB_Phi87 and its high similarity with bacteriophages Felix O1 and wV8 of E. coli with respect to genetic content and functional organization. Phylogenetic analysis of the large terminase subunits confirmed the phages' packaging strategies and their grouping into the different phage genera. All these studies are necessary for the development and use of an efficient cocktail with commercial applications in bacteriophage therapy against Salmonella. PMID:27148229
Capturing & Reusing Air Handler Condensate without Digging up the Campus
ERIC Educational Resources Information Center
Pruitt, Olen L.; Guccione, Patrick D.
2013-01-01
The University of Alabama at Birmingham (UAB) has experienced rapid growth in the last decade. In addition to educating over 17,000 students annually, UAB is a major research facility and academic healthcare center. Due to growth and expansion, UAB requires more and more water to sustain operations. The central chilled water system serves over…
Pre-Clinical Evaluation of a Novel RXR Agonist for the Treatment of Neuroblastoma
Waters, Alicia M.; Stewart, Jerry E.; Atigadda, Venkatram R.; Mroczek-Musulman, Elizabeth; Muccio, Donald D.; Grubbs, Clinton J.; Beierle, Elizabeth A.
2015-01-01
Neuroblastoma remains a common cause of pediatric cancer deaths, especially for children who present with advanced stage or recurrent disease. Currently, retinoic acid therapy is used as maintenance treatment to induce differentiation and reduce tumor recurrence following induction therapy for neuroblastoma, but unavoidable side effects are seen. A novel retinoid, UAB30, has been shown to generate negligible toxicities. In the current study, we hypothesized that UAB30 would have a significant impact on multiple neuroblastoma cell lines in vitro and in vivo. Cellular survival, cell cycle distribution, migration, and invasion were studied using alamarBlue® assays, FACS, and Transwell® assays, respectively, in multiple cell lines following treatment with UAB30. In addition, an in vivo murine model of human neuroblastoma was utilized to study the effects of UAB30 upon tumor xenograft growth and animal survival. We successfully demonstrated decreased cellular survival, invasion and migration, cell cycle arrest and increased apoptosis after treatment with UAB30. Furthermore, inhibition of tumor growth and increased survival were observed in a murine neuroblastoma xenograft model. The results of these in vitro and in vivo studies suggest a potential therapeutic role for the low-toxicity synthetic retinoid X receptor-selective agonist, UAB30, in neuroblastoma treatment. PMID:25944918
Pre-Clinical Evaluation of UAB30 in Pediatric Renal and Hepatic Malignancies
Waters, Alicia M.; Stewart, Jerry E.; Atigadda, Venkatram R.; Mroczek-Musulman, Elizabeth; Muccio, Donald D.; Grubbs, Clinton J.; Beierle, Elizabeth A.
2016-01-01
Rare tumors of solid organs remain some of the most difficult pediatric cancers to cure. These difficult tumors include rare pediatric renal malignancies such as malignant rhabdoid kidney tumors (MRKT) and non-osseous renal Ewing sarcoma, and hepatoblastoma, a pediatric liver tumor that arises from immature liver cells. There are data in adult renal and hepatic malignancies demonstrating the efficacy of retinoid therapy. The investigation of retinoic acid therapy in cancer is not a new strategy, but the widespread adoption of this therapy has been hindered by toxicities. Our laboratory has been investigating a novel synthetic rexinoid, UAB30, which exhibits a more favorable side effect profile. In this current study, we hypothesized that UAB30 would diminish the growth of tumor cells from both rare renal and liver tumors in vitro and in vivo. We successfully demonstrated decreased cellular proliferation, invasion and migration, cell cycle arrest and increased apoptosis after treatment with UAB30. Additionally, in in vivo murine models of human hepatoblastoma or rare human renal tumors, tumor xenograft growth was significantly decreased and animal survival increased after UAB30 treatment. UAB30 should be further investigated as a developing therapeutic for these rare and difficult-to-treat pediatric solid organ tumors. PMID:26873726
Chou, Chu-Fang; Hsieh, Yu-Hua; Grubbs, Clinton J; Atigadda, Venkatram R; Mobley, James A; Dummer, Reinhard; Muccio, Donald D; Eto, Isao; Elmets, Craig A; Garvey, W Timothy; Chang, Pi-Ling
2018-06-01
Bexarotene (Targretin®) is currently the only FDA-approved retinoid X receptor (RXR)-selective agonist for the treatment of cutaneous T-cell lymphomas (CTCLs). The main side effects of bexarotene are hypothyroidism and elevation of serum triglycerides (TGs). The novel RXR ligand 9-cis-UAB30 (UAB30) does not elevate serum TGs or induce hypothyroidism in normal subjects. To assess the preclinical efficacy and mechanism of action of UAB30 in the treatment of CTCLs and to compare its action with that of bexarotene, we evaluated, in patient-derived CTCL cell lines, UAB30 function in regulating growth, apoptosis, cell cycle checkpoints, and cell cycle-related markers. Compared to bexarotene, UAB30 had lower half-maximal inhibitory concentration (IC50) values and was more effective in inhibiting the G1 cell cycle checkpoint. Both rexinoids increased the stability of the cell cycle inhibitor p27kip1 protein, in part through targeting components involved in the ubiquitin-proteasome system: 1) decreasing SKP2, an F-box protein that binds and targets p27kip1 for degradation by the 26S proteasome, and 2) suppressing 20S proteasome activity (cell line-dependent) through downregulation of PSMA7, a component of the 20S proteolytic complex in the 26S proteasome. UAB30 and bexarotene both induce early apoptosis and suppress cell proliferation. Inhibition of the G1-to-S cell cycle transition by rexinoids is mediated, in part, through downregulation of SKP2 and/or 20S proteasome activity, leading to increased p27kip1 protein stability. Because UAB30 has minimal effect in elevating serum TGs and inducing hypothyroidism, it is potentially a better alternative to bexarotene for the treatment of CTCLs. Copyright © 2018 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
Genotype calling from next-generation sequencing data using haplotype information of reads
Zhi, Degui; Wu, Jihua; Liu, Nianjun; Zhang, Kui
2012-01-01
Motivation: Low coverage sequencing provides an economic strategy for whole genome sequencing. When sequencing a set of individuals, genotype calling can be challenging due to low sequencing coverage. Linkage disequilibrium (LD)-based refinement of genotype calling is essential to improve accuracy. Current LD-based methods use read counts or genotype likelihoods at individual potential polymorphic sites (PPSs). Reads that span multiple PPSs (jumping reads) can provide additional haplotype information overlooked by current methods. Results: In this article, we introduce a new Hidden Markov Model (HMM)-based method that can take into account jumping-read information across adjacent PPSs and implement it in the HapSeq program. Our method extends the HMM in Thunder and explicitly models jumping-read information as emission probabilities conditional on the states of adjacent PPSs. Our simulation results show that, compared to Thunder, HapSeq reduces the genotyping error rate by 30%, from 0.86% to 0.60%. The results from the 1000 Genomes Project show that HapSeq reduces the genotyping error rate by 12% and 9%, from 2.24% and 2.76% to 1.97% and 2.50%, for individuals with European and African ancestry, respectively. We expect our program can improve genotyping qualities of the large number of ongoing and planned whole genome sequencing projects. Contact: dzhi@ms.soph.uab.edu; kzhang@ms.soph.uab.edu Availability: The software package HapSeq and its manual can be found and downloaded at www.ssg.uab.edu/hapseq/. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22285565
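The jumping-read idea can be illustrated with a toy emission calculation. This is a conceptual sketch, not the HapSeq implementation: it assumes a symmetric per-base error model, and every name below is invented for the example.

    # A read spanning two adjacent PPSs must originate from a single
    # haplotype, so its likelihood couples the hidden states at both sites.
    ERR = 0.01  # assumed per-base sequencing error rate

    def base_lik(obs, true):
        """P(observed base | true allele) under a symmetric error model."""
        return 1.0 - ERR if obs == true else ERR

    def jumping_read_emission(read_bases, hap1, hap2):
        """P(read | diplotype at two adjacent PPSs), summing out the
        unknown haplotype of origin with weight 1/2. Both bases are
        scored against the SAME haplotype, which is the phase
        information that per-site genotype likelihoods discard."""
        lik = 0.0
        for hap in (hap1, hap2):
            p = 1.0
            for obs, true in zip(read_bases, hap):
                p *= base_lik(obs, true)
            lik += 0.5 * p
        return lik

    # For an A-T / G-C diplotype, a read consistent with one haplotype
    # is far more likely than a read that mixes the two haplotypes:
    print(jumping_read_emission(('A', 'T'), ('A', 'T'), ('G', 'C')))  # ~0.49
    print(jumping_read_emission(('A', 'C'), ('A', 'T'), ('G', 'C')))  # ~0.0099

In the HMM, emission terms of this kind are conditioned on the states of adjacent PPSs, which is how HapSeq extends the per-site model in Thunder.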
Pathophysiology and animal modeling of underactive bladder.
Tyagi, Pradeep; Smith, Phillip P; Kuchel, George A; de Groat, William C; Birder, Lori A; Chermansky, Christopher J; Adam, Rosalyn M; Tse, Vincent; Chancellor, Michael B; Yoshimura, Naoki
2014-09-01
While the symptomology of underactive bladder (UAB) may imply a primary dysfunction of the detrusor muscle, insights into pathophysiology indicate that both myogenic and neurogenic mechanisms need to be considered. Due to the lack of proper animal models, the current understanding of UAB pathophysiology is limited, and much of what is known about the clinical etiology of the condition has been derived from epidemiological data. We hereby review the current state of the art in the understanding of the pathophysiology of UAB and the animal models used to study it.
GATE Center of Excellence at UAB in Lightweight Materials for Automotive Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-07-31
This report summarizes the accomplishments of the UAB GATE Center of Excellence in Lightweight Materials for Automotive Applications. The first phase of the UAB DOE GATE center spanned the period 2005-2011. The UAB GATE goals coordinated with the overall goals of DOE's FreedomCAR and Vehicles Technologies initiative and the DOE GATE program. The FCVT goals are: (1) Development and validation of advanced materials and manufacturing technologies to significantly reduce automotive vehicle body and chassis weight without compromising other attributes such as safety, performance, recyclability, and cost; (2) To provide a new generation of engineers and scientists with knowledge and skills in advanced automotive technologies. The UAB GATE focused on both the FCVT and GATE goals in the following manner: (1) Train and produce graduates in lightweight automotive materials technologies; (2) Structure the engineering curricula to produce specialists in the automotive area; (3) Leverage automotive related industry in the State of Alabama; (4) Expose minority students to advanced technologies early in their career; (5) Develop innovative virtual classroom capabilities tied to real manufacturing operations; and (6) Integrate synergistic, multi-departmental activities to produce new product and manufacturing technologies for more damage tolerant, cost-effective, and lighter automotive structures.
Feng, Qing; Song, Young-Chae; Yoo, Kyuseon; Kuppanan, Nanthakumar; Subudhi, Sanjukta; Lal, Banwari
2018-08-01
The influence of polarized electrodes on methane production, which depends on the sludge concentration, was investigated in an upflow anaerobic bioelectrochemical (UABE) reactor. When the polarized electrode was placed in the bottom zone, where the sludge concentration is high, methane production was 5.34 L/L·d, which was 53% higher than in an upflow anaerobic sludge blanket (UASB) reactor. However, methane production was reduced to 4.34 L/L·d by placing the electrode in the upper zone of the UABE reactor, where the sludge concentration is lower. In the UABE reactor, methane production was mainly improved by an enhanced biological direct interspecies electron transfer (bDIET) pathway; methane production via the electrode was a minor fraction, less than 4% of total methane production. Polarized electrodes placed in the bottom zone with a high sludge concentration enhance bDIET for methane production in the UABE reactor and greatly improve methane production. Copyright © 2018. Published by Elsevier Ltd.
Norms of certain Jordan elementary operators
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Ji, Guoxing
2008-10-01
Let H be a complex Hilbert space and let B(H) denote the algebra of all bounded linear operators on H. For A, B ∈ B(H), the Jordan elementary operator U_{A,B} is defined by U_{A,B}(X) = AXB + BXA, X ∈ B(H). In this short note, we discuss the norm of U_{A,B}. We show that if … and ‖U_{A,B}‖ = ‖A‖‖B‖, then either AB* or B*A is 0. We give some examples of Jordan elementary operators U_{A,B} such that ‖U_{A,B}‖ = ‖A‖‖B‖ but AB* ≠ 0 and B*A ≠ 0, which answer negatively a question posed by M. Boumazgour in [M. Boumazgour, Norm inequalities for sums of two basic elementary operators, J. Math. Anal. Appl. 342 (2008) 386-393].
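A simple instance, constructed here for illustration (it is not one of the paper's examples), shows the stated dichotomy in action: for the 2×2 matrix units A = E_{11} and B = E_{22}, the norm equality holds and both products vanish.

    % For X = (x_{ij}) in M_2(\mathbb{C}):
    \[
    U_{A,B}(X) = E_{11} X E_{22} + E_{22} X E_{11}
               = x_{12} E_{12} + x_{21} E_{21},
    \qquad
    \|U_{A,B}(X)\| = \max\{|x_{12}|, |x_{21}|\},
    \]
    % so the supremum over \|X\| \le 1 is attained at X = E_{12}:
    \[
    \|U_{A,B}\| = 1 = \|A\|\,\|B\|,
    \qquad
    AB^{*} = E_{11}E_{22} = 0, \quad B^{*}A = E_{22}E_{11} = 0.
    \]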
USDA-ARS's Scientific Manuscript database
Leafy spurge (Euphorbia esula L.) is an herbaceous perennial weed that maintains its perennial growth habit through generation of underground adventitious buds (UABs) on the crown and lateral roots. These UABs undergo seasonal phases of dormancy under natural conditions, namely para-, endo-, and eco...
DOT National Transportation Integrated Search
2010-12-01
A number of initiatives were undertaken to support education, training, and technology transfer objectives related to UAB UTC Domain 2 Project: Development of a Dynamic Traffic Assignment and Simulation Model for Incident and Emergency Management App...
Significance of the Bacteriophage Treatment Schedule in Reducing Salmonella Colonization of Poultry
Bardina, Carlota; Spricigo, Denis A.; Cortés, Pilar
2012-01-01
Salmonella remains the major cause of food-borne diseases worldwide, with chickens known to be the main reservoir for this zoonotic pathogen. Among the many approaches to reducing Salmonella colonization of broilers, bacteriophage offers several advantages. In this study, three bacteriophages (UAB_Phi20, UAB_Phi78, and UAB_Phi87) obtained from our collection that exhibited a broad host range against Salmonella enterica serovar Enteritidis and Salmonella enterica serovar Typhimurium were characterized with respect to morphology, genome size, and restriction patterns. A cocktail composed of the three bacteriophages was more effective in promoting the lysis of S. Enteritidis and S. Typhimurium cultures than any of the three bacteriophages alone. In addition, the cocktail was able to lyse the Salmonella enterica serovars Virchow, Hadar, and Infantis. The effectiveness of the bacteriophage cocktail in reducing the concentration of S. Typhimurium was tested in two animal models using different treatment schedules. In the mouse model, 50% survival was obtained when the cocktail was administered simultaneously with bacterial infection and again at 6, 24, and 30 h postinfection. Likewise, in the White Leghorn chicken specific-pathogen-free (SPF) model, the best results, defined as a reduction of Salmonella concentration in the chicken cecum, were obtained when the bacteriophage cocktail was administered 1 day before or just after bacterial infection and then again on different days postinfection. Our results show that frequent treatment of the chickens with bacteriophage, and especially prior to colonization of the intestinal tract by Salmonella, is required to achieve effective bacterial reduction over time. PMID:22773654
Brezovich, Ivan A; Popple, Richard A; Duan, Jun; Shen, Sui; Wu, Xingen; Benhabib, Sidi; Huang, Mi; Cardan, Rex A
2016-07-08
Stereotactic radiosurgery (SRS) places great demands on spatial accuracy. Steel BBs used as markers in quality assurance (QA) phantoms are clearly visible in MV and planar kV images, but artifacts compromise cone-beam CT (CBCT) isocenter localization. The purpose of this work was to develop a QA phantom for measuring the isocenter congruence of planar kV, MV, and CBCT imaging systems with sub-mm accuracy and to design a practical QA procedure that includes daily Winston-Lutz (WL) tests and does not require computer aid. The salient feature of the phantom, the Universal Alignment Ball (UAB), is a novel marker for precisely localizing the isocenters of CBCT, planar kV, and MV beams. It consists of a 25.4 mm diameter sphere of polymethylmethacrylate (PMMA) containing a concentric 6.35 mm diameter tungsten carbide ball. The large density difference between PMMA and the polystyrene foam in which the PMMA sphere is embedded yields a sharp image of the sphere for accurate CBCT registration. The tungsten carbide ball serves in finding isocenter in planar kV and MV images and in doing WL tests. With the aid of the UAB, CBCT isocenter was located within 0.10 ± 0.05 mm of its true position, and MV isocenter was pinpointed in planar images to within 0.06 ± 0.04 mm. In clinical morning QA tests extending over an 18-month period, the UAB consistently yielded measurements with sub-mm accuracy. The average distance between isocenter defined by orthogonal kV images and CBCT measured 0.16 ± 0.12 mm. In WL tests the central ray of anterior beams defined by a 1.5 × 1.5 cm² MLC field agreed with CBCT isocenter within 0.03 ± 0.14 mm in the lateral direction and within 0.10 ± 0.19 mm in the longitudinal direction. Lateral MV beams approached CBCT isocenter within 0.00 ± 0.11 mm in the vertical direction and within -0.14 ± 0.15 mm longitudinally. It took therapists about 10 min to do the tests. The novel QA phantom allows pinpointing CBCT and MV isocenter positions to better than 0.2 mm, using visual image registration. Under CBCT guidance, MLC-defined beams are deliverable with sub-mm spatial accuracy. The QA procedure is practical for daily tests by therapists. © 2016 The Authors
The Regional Autopsy Center: The University of Alabama at Birmingham Experience.
Atherton, Daniel Stephen; Reilly, Stephanie
2017-09-01
Rates of autopsied deaths have decreased significantly for the last several decades. It may not be practical for some institutions to maintain the facilities and staffing required to perform autopsies. In recent years, the University of Alabama at Birmingham (UAB) has established contracts to perform autopsies for several regional institutions including the Alabama Department of Forensic Sciences (ADFS), the United States Veterans Affairs, the local prison system, local community hospitals, and with families for private autopsy services. Contracts and autopsy data from 2004 to 2015 were obtained and reviewed. Since 2004, the number of UAB hospital autopsies trended slightly downward. On average, UAB hospital cases comprised most yearly cases, and the ADFS was the second largest contributor of cases. Income generated from outside autopsies performed from 2006 to 2015 totaled just more than 2 million dollars, and most of the income was generated from referred ADFS cases. This study provides evidence that a centralized institution (regional autopsy center [RAC]) can provide regional autopsy service in a practical, feasible, and economically viable manner, and a RAC can benefit both the referring institutions as well as the RAC itself.
Immunohistochemical detection of p53 protein in ameloblastoma types.
el-Sissy, N A
1999-05-01
Overexpression of p53 protein in unicystic ameloblastoma (uAB) is denser than in the conventional ameloblastoma (cAB) type, indicating increased wild-type p53, which suppresses the growth potential of uAB and denotes an early event of neoplastic transformation, probably of a pre-existing odontogenic cyst. Overexpression of p53 in borderline cAB and malignant ameloblastoma (mAB) types might reflect mutant p53 protein playing an oncogenic role and promoting tumour growth. Overexpression of p53 protein could be a valid screening method for predicting underlying malignant genetic changes in AB types, through an increased frequency of immunoreactive cells or increased staining density.
2014-12-01
Hospitals around the country have stepped up their efforts to train staff and implement procedures to ensure the safe identification and management of any patients with signs of Ebola virus disease (EVD). Ronald Reagan UCLA Medical Center in Los Angeles, CA, held an "Ebola preparedness exercise" to give staff an opportunity to walk through the hospital's protocol for handling a simulated patient with EVD. The University of Alabama at Birmingham (UAB) Medical Center has held similar exercises, and is now holding twice-weekly meetings of its leadership team to make sure that all new developments in the Ebola outbreak are communicated. UCLA Medical Center has prepared personal protective equipment (PPE) kits based on the practices developed at Emory University Hospital, which has thus far had the most experience in this country in caring for patients with EVD. The UCLA Health System has adjusted its medical record system so that a red flag is placed on the electronic medical record (EMR) of any patient who has recently traveled to a high-risk area. UAB Medical Center has incorporated what had been a paper-and-pencil screening tool for EVD into its electronic medical record. Training on PPE as well as EVD screening is being provided to first-responders and 911 call center dispatchers in the UAB system.
Frölich, Michael A; Banks, Catiffaney; Brooks, Amber; Sellers, Alethia; Swain, Ryan; Cooper, Lauren
2014-11-01
The number of reported pregnancy-related deaths in the United States steadily increased from 7.2 deaths per 100,000 live births in 1987 to a high of 17.8 deaths per 100,000 live births in 2009. Compared to Caucasian women, African American women were nearly 4 times as likely to die from childbirth. To better understand the reason for this trend, we conducted a case-control study at University of Alabama at Birmingham (UAB) Hospital. Our primary study hypothesis was that women who died at UAB were more likely to be African American than women in a control group who delivered an infant at UAB and did not die. We expected to find a difference in race proportions and other patient characteristics that would further help to elucidate the cause of a racial disparity in maternal deaths. We reviewed all maternal deaths (cases) at UAB Hospital from January 1990 through December 2010, identified from electronic uniform billing data and ICD-9 codes. Each maternal death was matched 2:1 with women who delivered at a time that most closely coincided with the time of the maternal death, in a 2-step selection process (electronic identification and manual confirmation). Maternal variables obtained were comorbidities, duration of hospital stay, cause of death, race, distance from home to hospital, income, prenatal care, body mass index, parity, insurance type, mode of delivery, and marital status. The strength of univariate associations of maternal variables and case/control status was calculated. The association of case/control status and race was also examined after controlling for residential distance from the hospital. There was insufficient evidence to suggest a racial disparity in maternal death. The proportion of African American women was 57% (42 of 77) in the maternal death group and 61% (94 of 154) in the control group (P = 0.23). The univariate odds ratio for maternal death for African American to Caucasian race was 0.66 (95% confidence interval [CI], 0.37-1.19); the adjusted odds ratio was 1.46 (95% CI, 0.73-3.01). A longer distance from residence to the hospital was a highly significant predictor (P < 0.001) of maternal death. We did not observe a racial disparity in maternal deaths at UAB Hospital. We suggest that the next step toward understanding racial differences in maternal deaths reported in the United States should be directed at health care delivery outside the tertiary care hospital setting, particularly at eliminating access barriers to health care for all women.
Miller, Joseph H; Zywicke, Holly A; Fleming, James B; Griessenauer, Christoph J; Whisenhunt, Thomas R; Okor, Mamerhi O; Harrigan, Mark R; Pritchard, Patrick R; Hadley, Mark N
2013-06-01
The April 27, 2011, tornados that affected the southeastern US resulted in 248 deaths in the state of Alabama. The University of Alabama at Birmingham (UAB) Medical Center, the largest Level I trauma center in the state, triaged and treated a large number of individuals who suffered traumatic injuries during these events, including those requiring neurosurgical assessment and treatment. A retrospective review of all adult patients triaged at UAB Medical Center during the April 27, 2011, tornados was conducted. Those patients who were diagnosed with and treated for neurosurgical injuries were included in this cohort. The Division of Neurosurgery at UAB Medical Center received 37 consultations in the 36 hours following the tornado disaster. An additional patient presented 6 days later, having suffered a lumbar spine fracture that ultimately required operative intervention. Twenty-seven patients (73%) suffered injuries as a direct result of the tornados. Twenty-three (85%) of these 27 patients experienced spine and spinal cord injuries. Four patients (15%) suffered intracranial injuries and 2 patients (7%) suffered combined intracranial and spinal injuries. The spinal fractures that were evaluated and treated were predominantly thoracic (43.5%) and lumbar (43.5%). The neurosurgery service performed 14 spinal fusions, 1 ventriculostomy, 2 halo placements, 1 diagnostic angiogram, 1 endovascular embolectomy, and 1 wound debridement and lavage. Twenty-two patients (81.5%) were neurologically intact at discharge and all but 4 had 1 year of follow-up. Three patients had persistent deficits from spinal cord injuries and there was 1 death in a patient with multisystem injuries in whom no procedures were performed. Two patients experienced postoperative complications in the form of 1 wound infection and 1 stroke. The April 27, 2011, tornados in Alabama produced significant neurosurgical injuries that primarily involved the spine. There was a disproportionate number of patients with thoracolumbar fractures, possibly explained by the county medical examiner's postmortem findings, which demonstrated a high prevalence of fatal cervical spine and traumatic brain injuries. The UAB experience can be used to aid other institutions in preparing for the appropriate allotment of resources in the event of a similar natural disaster.
Final report : UAB transportation workforce development.
DOT National Transportation Integrated Search
2014-06-01
Transportation engineering supports safe and efficient movement of people and goods through planning, design, operation and management of transportation systems. As needs for transportation continue to grow, the future needs for qualified transpo...
Exercises to Improve Gait Abnormalities
... are so characteristic that they have been given descriptive names: Propulsive gait; a stooped, rigid posture, with ...
... 7th Avenue S. Birmingham, AL 35233 (205) 939-5281 https://www.childrensal.org/SpinaBifidaProgram UAB Spain Rehab Adult ... Ave S Birmingham, AL 35249 Phone: (205) 934-4131 http://www.uabmedicine.org/locations/spain-rehabilitation-center Children’s ...
UAB HRFD Core Center: Core A: The Hepato/Renal Fibrocystic Diseases Translational Resource
2017-09-15
Hepato/Renal Fibrocystic Disease; Autosomal Recessive Polycystic Kidney Disease; Joubert Syndrome; Bardet Biedl Syndrome; Meckel-Gruber Syndrome; Congenital Hepatic Fibrosis; Caroli Syndrome; Oro-Facial-Digital Syndrome Type I; Nephronophthisis; Glomerulocystic Kidney Disease
Streamlining air quality models in Alabama
DOT National Transportation Integrated Search
2004-07-01
This report documents a research project sponsored by the Alabama Department of Transportation (ALDOT) and conducted by the University of Central Florida (UCF) and the University of Alabama at Birmingham (UAB) to develop a user-friendly, Windows vers...
Challenges and Opportunities in Interdisciplinary Materials Research Experiences for Undergraduates
NASA Astrophysics Data System (ADS)
Vohra, Yogesh; Nordlund, Thomas
2009-03-01
The University of Alabama at Birmingham (UAB) offers a broad range of interdisciplinary materials research experiences to undergraduate students with diverse backgrounds in physics, chemistry, applied mathematics, and engineering. The research projects offered cover a broad range of topics including high-pressure physics, microelectronic materials, nano-materials, laser materials, bioceramics and biopolymers, cell-biomaterials interactions, planetary materials, and computer simulation of materials. The students welcome the opportunity to work with an interdisciplinary team of basic science, engineering, and biomedical faculty, but the challenge lies in learning the key vocabulary for interdisciplinary collaborations and experimental tools, and in working in an independent capacity. The career development workshops dealing with the graduate school application process and entrepreneurial business activities were found to be most effective. The interdisciplinary university-wide poster session helped students broaden their horizons in research careers. The synergy of the REU program with other concurrently running high school summer programs on the UAB campus will also be discussed.
The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.
Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J
2017-11-01
Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered to be the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in the de-identification regular track of a shared task challenge organized by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques, such as disambiguation rules and term ambiguity measurement, and used a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With those encouraging results, we gained the confidence to improve and use the tool for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
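A toy sketch of the rule-based and dictionary-matching layers that such pipelines combine (illustrative patterns and tags only; this is not the UAB tool or any N-GRID submission):

    import re

    # Regular expressions catch well-structured PHI; a gazetteer pass
    # catches known names. Real systems add ML tagging and sieve passes.
    PATTERNS = [
        (re.compile(r'\b\d{3}[-.]\d{3}[-.]\d{4}\b'), '[PHONE]'),
        (re.compile(r'\b\d{1,2}/\d{1,2}/\d{2,4}\b'), '[DATE]'),
        (re.compile(r'\b\d{3}-\d{2}-\d{4}\b'), '[SSN]'),
    ]
    NAME_DICT = {'smith', 'johnson'}  # stand-in for a real name gazetteer

    def deidentify(text):
        for pattern, tag in PATTERNS:
            text = pattern.sub(tag, text)
        tokens = [('[NAME]' if t.lower().strip('.,') in NAME_DICT else t)
                  for t in text.split()]
        return ' '.join(tokens)

    print(deidentify('Mr. Smith called 205-555-1234 on 4/27/2011.'))
    # -> Mr. [NAME] called [PHONE] on [DATE].

A multi-pass sieve, as mentioned in the abstract, would run passes like these in order of precision, letting earlier, higher-precision passes lock in their decisions.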
Prevalence of attention-deficit/hyperactivity disorder among children with vision impairment
DeCarlo, Dawn K.; Bowman, Ellen; Monroe, Cara; Kline, Robert; McGwin, Gerald; Owsley, Cynthia
2014-01-01
Purpose To evaluate the prevalence of parent-reported attention-deficit/hyperactivity disorder (ADHD) in two clinics in Alabama serving children with vision impairment. Methods The medical records of children 4–17 years of age attending the Alabama School for the Blind (ASB) during the 2010–2011 school year or seen at the University of Alabama at Birmingham (UAB) Center for Low Vision Rehabilitation between 2006 and 2010 were retrospectively reviewed. Sociodemographics, ocular characteristics, and parental report of ADHD diagnosis were obtained. The prevalence of ADHD was compared to national and state figures for age-similar children regardless of comorbidities. The prevalence of ADHD, sociodemographic, and ocular characteristics was also compared between clinical sites. Results A total of 264 children participated in the study (95 from ASB and 169 from UAB). The prevalence of ADHD among children with visual acuity better than hand motion (n = 245) was 22.9%, which is higher than reported state (14.3%) and national prevalence (9.5%) for children in this age range. The prevalence was similar at ASB (22.4%) and UAB (23.1%). Those with ADHD were similar to those without ADHD with respect to age, sex, and race. Children with ADHD were significantly less likely to have nystagmus and more likely to have better visual acuity (P < 0.05). The prevalence of ADHD among the 19 participants with total or near total vision loss (all from ASB) was 10.5%. Conclusions Our analyses suggest that children with vision impairment may be more likely to be diagnosed with ADHD than children in the general population. PMID:24568975
Arab Hassani, Faezeh; Mogan, Roshini P; Gammad, Gil G L; Wang, Hao; Yen, Shih-Cheng; Thakor, Nitish V; Lee, Chengkuo
2018-04-24
Aging, neurologic diseases, and diabetes are a few risk factors that may lead to underactive bladder (UAB) syndrome. Despite all of the serious consequences of UAB, current solutions, the most common being urethral catheterization, are all accompanied by serious shortcomings. The necessity of multiple catheterizations per day for a physically able patient not only reduces the quality of life with constant discomfort and pain but can also end up causing serious complications. Here, we present a bistable actuator to empty the bladder by incorporating shape memory alloy components integrated on flexible polyvinyl chloride sheets. The introduction of two compression and restoration phases for the actuator allows repeated actuation for a more complete voiding of the bladder. The proposed actuator exhibits one of the highest reported voiding percentages, up to 78% of the bladder volume in an anesthetized rat after only 20 s of actuation. This amount of voiding is comparable to the common catheterization method, and its one-time implantation onto the bladder rectifies the drawbacks of multiple catheterizations per day. Furthermore, the scaling of the device for animal models larger than rats can be easily achieved by adjusting the number of nitinol springs. For neurogenic UAB patients with degraded nerve function as well as a degenerated detrusor muscle, we integrate a flexible triboelectric nanogenerator sensor with the actuator to detect the fullness of the bladder. The sensitivity of this sensor to the filling status of the bladder shows its capability for defining a self-control system in the future that would allow autonomous micturition.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-24
... at 12:00 Midnight ET (2400) on January 13, 2012. ADDRESSES: Both directories can be accessed...:00 Midnight ET (2400) on January 13, 2012. More information on PSOs can be obtained through AHRQ's...
UAB UTC transit related activities
DOT National Transportation Integrated Search
2011-01-25
The ITS Strategic Plan outlines a strategy for improving the efficiency of the Region's existing highway and transit systems. The Region's overall goal is to improve the efficiency and effectiveness of existing systems so as to reduce the need to bui...
Gonçalves, Hernâni; Pinto, Paula; Silva, Manuela; Ayres-de-Campos, Diogo; Bernardes, João
2016-04-01
Fetal heart rate (FHR) monitoring is used routinely in labor, but conventional methods have a limited capacity to detect fetal hypoxia/acidosis. An exploratory study was performed on the simultaneous assessment of maternal heart rate (MHR) and FHR variability, to evaluate their evolution during labor and their capacity to detect newborn acidemia. MHR and FHR were simultaneously recorded in 51 singleton term pregnancies during the last two hours of labor and compared with newborn umbilical artery blood (UAB) pH. Linear and nonlinear indices were computed separately for MHR and FHR. Interaction between MHR and FHR was quantified through the same indices applied to FHR-MHR and through their correlation and cross-entropy. Univariate and bivariate statistical analysis included nonparametric confidence intervals and statistical tests, receiver operating characteristic (ROC) curves, and linear discriminant analysis. Progression of labor was associated with a significant increase in most MHR and FHR linear indices, whereas entropy indices decreased. FHR alone, and in combination with MHR as FHR-MHR, showed the highest auROC values for prediction of fetal acidemia, with 0.76 and 0.88 for the UAB pH thresholds 7.20 and 7.15, respectively. The inclusion of MHR in bivariate analysis achieved sensitivity and specificity values of nearly 100% and 89.1%, respectively. These results suggest that simultaneous analysis of MHR and FHR may improve the identification of fetal acidemia compared with FHR alone, namely during the last hour of labor.
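A minimal sketch of the kinds of indices described (not the authors' code; the sampling rate, recording length, and index choices are assumptions for illustration):

    import numpy as np

    def linear_indices(x):
        """Basic linear time-domain indices of a heart-rate series (bpm)."""
        return {'mean': float(np.mean(x)),
                'std': float(np.std(x)),
                # short-term variability: mean absolute successive difference
                'stv': float(np.mean(np.abs(np.diff(x))))}

    rng = np.random.default_rng(0)
    mhr = 85 + 5 * rng.standard_normal(600)    # assumed maternal series
    fhr = 140 + 8 * rng.standard_normal(600)   # assumed fetal series

    print(linear_indices(mhr))
    print(linear_indices(fhr))
    print(linear_indices(fhr - mhr))            # same indices on FHR-MHR
    print(float(np.corrcoef(mhr, fhr)[0, 1]))   # MHR/FHR interaction

The entropy and cross-entropy indices mentioned in the abstract would be computed on the same series, with the resulting features fed to the ROC and linear discriminant analyses.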
Buys, David R.
2013-01-01
Objectives: Aging adults face an increased risk of adverse health events as well as risk for a decrease in personal competencies across multiple domains. These factors may inhibit the ability of an older adult to age in place and may result in a nursing home admission (NHA). This study combines insights from Lawton's environmental press theory with the neighborhood disadvantage (ND) literature to examine the interaction of the neighborhood environment and individual characteristics on NHA. Methods: Characteristics associated with the likelihood of NHA for community-dwelling older adults were examined using data collected over 8.5 years from the UAB Study of Aging. Logistic regression models were used to test direct effects of ND on NHA for all participants. The sample was then stratified into 3 tiers of ND to examine differences in individual-level factors by level of ND. Results: There was no direct link between living in a disadvantaged neighborhood environment and likelihood of NHA, but physical impairment was associated with NHA for older adults living in highly disadvantaged neighborhood environments, in contrast to older adults living in less disadvantaged neighborhood environments, where no association was observed. Discussion: These outcomes highlight (a) the usefulness of linking Lawton's theories of the environment with the ND literature to assess health-related outcomes and (b) the importance of the neighborhood environment for older adults' ability to age in place. PMID:23034471
The UAB University Transportation Center update : Vol. 4, No. 1.
DOT National Transportation Integrated Search
2010-01-01
Dr. Russ Fine, Dr. Despina Stavrinos and Ms. Crystal Franklin were invited by US DOT Secretary Ray LaHood to attend the 2010 Distracted Driving Summit held in Washington, DC in September. The Summit brought together leading transportation officials, ...
Construction from the Inside Out: UAB's Successful In-House Organization.
ERIC Educational Resources Information Center
Baker, Brooks H. III; Hammonds, Hope Duncan
2001-01-01
Shares some of the success stories of the University of Alabama at Birmingham's (UAB) Design Build Services Department and the keys to achieving that success. The evolution of the school's in-house construction management is detailed along with the department's history. (GR)
Liposome-Encapsulated Bacteriophages for Enhanced Oral Phage Therapy against Salmonella spp.
Colom, Joan; Cano-Sarabia, Mary; Otero, Jennifer; Cortés, Pilar; Maspoch, Daniel; Llagostera, Montserrat
2015-07-01
Bacteriophages UAB_Phi20, UAB_Phi78, and UAB_Phi87 were encapsulated in liposomes, and their efficacy in reducing Salmonella in poultry was then studied. The encapsulated phages had a mean diameter of 309 to 326 nm and a positive charge between +31.6 and +35.1 mV (pH 6.1). In simulated gastric fluid (pH 2.8), the titer of nonencapsulated phages decreased by 5.7 to 7.8 log units, whereas encapsulated phages were significantly more stable, with losses of 3.7 to 5.4 log units. The liposome coating also improved the retention of bacteriophages in the chicken intestinal tract. When cocktails of the encapsulated and nonencapsulated phages were administered to broilers, after 72 h the encapsulated phages were detected in 38.1% of the animals, whereas the nonencapsulated phages were present in only 9.5%. The difference was significant. In addition, in an in vitro experiment, the cecal contents of broilers promoted the release of the phages from the liposomes. In broilers experimentally infected with Salmonella, the daily administration of the two cocktails for 6 days postinfection conferred similar levels of protection against Salmonella colonization. However, once treatment was stopped, protection by the nonencapsulated phages disappeared, whereas that provided by the encapsulated phages persisted for at least 1 week, showing the enhanced efficacy of the encapsulated phages in protecting poultry against Salmonella over time. The methodology described here allows the liposome encapsulation of phages of different morphologies. The preparations can be stored for at least 3 months at 4°C and could be added to the drinking water and feed of animals. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Use of a bacteriophage cocktail to control Salmonella in food and the food industry.
Spricigo, Denis Augusto; Bardina, Carlota; Cortés, Pilar; Llagostera, Montserrat
2013-07-15
The use of lytic bacteriophages for the biocontrol of food-borne pathogens in food and in the food industry is gaining increasing acceptance. In this study, the effectiveness of a bacteriophage cocktail composed of three different lytic bacteriophages (UAB_Phi20, UAB_Phi78, and UAB_Phi87) was determined in four different food matrices (pig skin, chicken breasts, fresh eggs, and packaged lettuce) experimentally contaminated with Salmonella enterica serovar Typhimurium and S. enterica serovar Enteritidis. A significant bacterial reduction (>4 and 2 log/cm² for S. Typhimurium and S. Enteritidis, respectively; p≤0.005) was obtained in pig skin sprayed with the bacteriophage cocktail and then incubated at 33 °C for 6 h. Significant decreases in the concentration of S. Typhimurium and S. Enteritidis were also measured in chicken breasts dipped for 5 min in a solution containing the bacteriophage cocktail and then refrigerated at 4 °C for 7 days (2.2 and 0.9 log10 cfu/g, respectively; p≤0.0001) as well as in lettuce similarly treated for 60 min at room temperature (3.9 and 2.2 log10 cfu/g, respectively; p≤0.005). However, only a minor reduction of the bacterial concentration (0.9 log10 cfu/cm² of S. Enteritidis and S. Typhimurium; p≤0.005) was achieved in fresh eggs sprayed with the bacteriophage cocktail and then incubated at 25 °C for 2 h. These results show the potential effectiveness of this bacteriophage cocktail as a biocontrol agent of Salmonella in several food matrices under conditions similar to those used in their production. Copyright © 2013 Elsevier B.V. All rights reserved.
Local evaluation report : state of Alabama (2001), automated crash notification system, UAB
DOT National Transportation Integrated Search
2005-01-01
This project is the pilot phase of a longer-term project to integrate ACN and AACN technology into a comprehensive trauma system. Such a system exists in Alabama's central Birmingham Regional EMS System (BREMSS). The project involves three tasks: t...
Advancing geriatric education: development of an interprofessional program for health care faculty.
Ford, Channing R; Brown, Cynthia J; Sawyer, Patricia; Rothrock, Angela G; Ritchie, Christine S
2015-01-01
To improve the health care of older adults, a faculty development program was created to enhance geriatric knowledge. The University of Alabama at Birmingham (UAB) Geriatric Education Center leadership instituted a one-year, 36-hour curriculum focusing on older adults with complex health care needs. Content areas were chosen from the Institute of Medicine Transforming Health Care Quality report and a local needs assessment. Potential preceptors were identified and participant recruitment efforts began by contacting UAB department chairs of health care disciplines. This article describes the development of the program and its implementation over three cohorts of faculty scholars (n = 41) representing 13 disciplines, from nine institutions of higher learning. Formative and summative evaluation showed program success in terms of positive faculty reports of the program, information gained, and expressed intent by each scholar to apply learned content to teaching and/or clinical practice. This article describes the initial framework and strategies guiding the development of a thriving interprofessional geriatric education program.
USDA-ARS's Scientific Manuscript database
Recommended rates of glyphosate for non-cultivated areas destroy the aboveground shoots of the perennial plant leafy spurge. However, such applications cause little or no damage to underground adventitious buds (UABs), and thus the plant readily regenerates vegetatively. High concentrations of glyph...
ERIC Educational Resources Information Center
Rushton, Lia
2017-01-01
When I was appointed fellowships advisor at UAB back in the late 1990s and before the formation of the National Association of Fellowships Advisors, as a first order of business I spoke with the university's few former winners and finalists about their experiences applying for nationally competitive scholarships. One such former applicant, now an…
Identifying Immune Drivers of Gulf War Illness Using a Novel Daily Sampling Approach
2015-10-01
rescheduled to allow time to complete data collection from the 35 participants that will be enrolled at UAB). Task 2: Submission of Documents for ... collection. During the 25-day immune monitoring phase, blood was collected by trained phlebotomists or research nurses.
DOT National Transportation Integrated Search
2010-09-01
Traffic congestion is a primary concern during major incident and evacuation scenarios and can create difficulties for emergency vehicles attempting to enter and exit affected areas; however, many of the dispatchers who would be responsible for direc...
ERIC Educational Resources Information Center
Pereira, Alda; Oliveira, Isolina; Tinoca, Luis; Amante, Lucia; Relvas, Maria de Jesus; Pinto, Maria do Carmo Teixeira; Moreira, Darlinda
2009-01-01
Universidade Aberta (UAb), being a distance teaching university, is particularly suited to tackle some of the recommendations concerning the assurance of accessibility to education expressed in the "Education and Training 2010" goals. Meanwhile, the University's strategic plan for 2006/2010 decided to implement a fully virtual innovative…
2012-08-01
Investigator 15 UAB X1219: Molecular determinants of cellular susceptibility to PARP inhibition in an ex-vivo model of human cholangiocarcinoma. Role: Co-Principal Investigator. Career Development
USDA-ARS?s Scientific Manuscript database
Leafy spurge (Euphorbia esula L.) is an herbaceous perennial weed that reproduces vegetatively from an abundance of underground adventitious buds (UABs), which undergo well-defined phases of seasonal dormancy (para-, endo- and eco-dormancy). In this study, the effects of dehydration-stress on vegeta...
A clinical study of patient acceptance and satisfaction of Varilux Plus and Varilux Infinity lenses.
Cho, M H; Barnette, C B; Aiken, B; Shipp, M
1991-06-01
An independent study was conducted at the UAB School of Optometry to determine the clinical success with Varilux Plus (Varilux 2) and Varilux Infinity progressive addition lenses (PAL). Two hundred eighty (280) patients were fitted between June 1988 and May 1989. The acceptance rate of 97.5 percent was based on the number of lenses ordered versus the number of lenses returned. Patients were contacted by telephone and asked to rate their level of satisfaction with their PALs. A chi-square (non-parametric) test revealed no statistically significant differences in levels of satisfaction with respect to gender, PAL type, or degree of presbyopia. Also, neither refractive error nor previous lens history had a measurable impact on patient satisfaction.
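A minimal sketch of the chi-square comparison the abstract describes; the satisfaction counts below are hypothetical, since the study's raw tables are not given here:

```python
# Chi-square test of independence between PAL type and satisfaction.
from scipy.stats import chi2_contingency

# Rows: PAL type (Varilux Plus, Varilux Infinity); columns: satisfied vs. not.
table = [[112, 4],   # hypothetical counts
         [158, 6]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p > 0.05 -> no significant difference
```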
Life-Space Mobility Change Predicts 6-Month Mortality.
Kennedy, Richard E; Sawyer, Patricia; Williams, Courtney P; Lo, Alexander X; Ritchie, Christine S; Roth, David L; Allman, Richard M; Brown, Cynthia J
2017-04-01
To examine 6-month change in life-space mobility as a predictor of subsequent 6-month mortality in community-dwelling older adults. Prospective cohort study. Community-dwelling older adults from five Alabama counties in the University of Alabama at Birmingham (UAB) Study of Aging. A random sample of 1,000 Medicare beneficiaries, stratified according to sex, race, and rural or urban residence, recruited between November 1999 and February 2001, followed by a telephone interview every 6 months for the subsequent 8.5 years. Mortality data were determined from informant contacts and confirmed using the National Death Index and Social Security Death Index. Life-space was measured at each interview using the UAB Life-Space Assessment, a validated instrument for assessing community mobility. Eleven thousand eight hundred seventeen 6-month life-space change scores were calculated over 8.5 years of follow-up. Generalized linear mixed models were used to test predictors of mortality at subsequent 6-month intervals. Three hundred fifty-four deaths occurred within 6 months of two sequential life-space assessments. Controlling for age, sex, race, rural or urban residence, and comorbidity, life-space score and life-space decline over the preceding 6-month interval predicted mortality. A 10-point decrease in life-space resulted in a 72% increase in odds of dying over the subsequent 6 months (odds ratio = 1.723, P < .001). Life-space score at the beginning of a 6-month interval and change in life-space over 6 months were each associated with significant differences in subsequent 6-month mortality. Life-space assessment may assist clinicians in identifying older adults at risk of short-term mortality. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
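The reported effect size can be sanity-checked with a little arithmetic. Assuming the usual logistic formulation in which life-space change enters linearly, an odds ratio of 1.723 per 10-point decrease implies a per-point coefficient of ln(1.723)/10:

```python
# Back-of-the-envelope check of the reported odds ratio; assumes a logistic
# model in which life-space decline enters linearly (not the study's code).
import math

or_per_10pt_decrease = 1.723
beta_per_point = math.log(or_per_10pt_decrease) / 10   # ~0.054 per point of decline
# Implied odds multiplier for, e.g., a 20-point decline over 6 months:
print(math.exp(20 * beta_per_point))                   # ~2.97, roughly triple the odds
```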
Reflections on Finally Becoming a Professor after Forty Years
ERIC Educational Resources Information Center
Watkins, J. Foster
2016-01-01
I wrote this reflective piece in 1999 as I was assuming my first full-time position as a professor with limited administrative responsibilities at the University of Alabama at Birmingham (UAB). After 30-plus years in administrative roles in higher education that provided the opportunity to teach on a part-time basis only, I quickly became aware of…
Constructing a modern cytology laboratory: A toolkit for planning and design.
Roberson, Janie; Wrenn, Allison; Poole, John; Jaeger, Andrew; Eltoum, Isam A
2013-01-01
Constructing or renovating a laboratory can be both challenging and rewarding. UAB Cytology (UAB CY) recently undertook a project to relocate from a building constructed in 1928 to new space. UAB CY is part of an academic center that serves a large patient population, supports the training of a cytotechnology program and a cytopathology fellowship program, and is actively involved in research and scholarly activity. Our objectives were to provide a safe, aesthetically pleasing space and to gain efficiencies through lean processes. The phases of any laboratory design project are Planning, Schematic Design (SD), Design Development (DD), Construction Documents (CD) and Construction. Lab personnel are most critical in the Planning phase. During this time, stakeholders, relationships, budget, square footage and equipment were identified. Equipment lists, covering items to be relocated, items to be purchased new and projected future growth, ensured that utilities were matched to expected need. A chemical inventory was prepared and adequate storage space was planned. Regulatory and safety requirements were discussed. Tours and high-level process flow diagrams helped architects and engineers understand the laboratory's daily work. Future needs were addressed through a questionnaire that identified potential areas of growth and technological change. Throughout the project, decisions were driven by data from the planning phase. During the SD phase, objective information from the first phase was used by architects and planners to create a general floor plan. This was the basis of a series of meetings to brainstorm and suggest modifications. DD brought more detail to the plans: engineering, casework, equipment specifics and finishes. Design changes should be completed at this phase. The next phase, CD, took the project from the lab's purview into a purely technical mode. Construction documents were used by the contractor for the bidding process and ultimately the Construction phase. The project fitted out a total of 9,000 square feet: 4,000 laboratory and 5,000 office/support. Lab space includes areas for Prep, CT screening, sign-out and Imaging. Adjacent space houses faculty offices and conferencing facilities. Transportation time (waste) was reduced by a pneumatic tube system, a specimen drop window to the Prep Lab and a pass-through window to the screening area. Open screening and prep areas allow visual management control. Efficiencies were gained by ergonomically placing CT manual and imaging microscopes and computers in close proximity, which also facilitated a paperless workflow for additional savings. Logistically, closer proximity to Surgical Pathology maximized the natural synergies between the areas. Lab construction should be a systematic process based on sound principles of safety, high-quality testing and finance. Our detailed planning and design process can serve as a model for others undertaking similar projects.
ERIC Educational Resources Information Center
Allman, Richard M.; Sawyer, Patricia; Crowther, Martha; Strothers, Harry S., III; Turner, Timothy; Fouad, Mona N.
2011-01-01
Purpose: To identify racial/ethnic differences in retention of older adults at 3 levels of participation in a prospective observational study: telephone, in-home assessments, and home visits followed by blood draws. Design and Methods: A prospective study of 1,000 community-dwelling Medicare beneficiaries aged 65 years and older included a…
ERIC Educational Resources Information Center
Da Cruz Duran, Maria Renata; Da Costa, Celso José; Amiel, Tel
2014-01-01
Since June 2011, research on the Open University System of Brazil's (UAB's) official evaluation processes relating to learner support facilities has been carried out by the Teachers' Training, New Information, Communication and Technologies research group, which is linked to the Laboratory of New Technologies for Teaching at Fluminense Federal…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koperna, George J.; Pashin, Jack; Walsh, Peter
The Commercial Scale Project is a US DOE/NETL funded initiative aimed at enhancing the knowledge base and industry's ability to geologically store vast quantities of anthropogenic carbon. In support of this goal, a large-scale, stacked-reservoir geologic model was developed for Gulf Coast sediments centered on the Citronelle Dome in southwest Alabama, the site of the SECARB Phase III Anthropogenic Test. Characterization of regional geology to construct the model consists of an assessment of the entire stratigraphic continuum at Citronelle Dome, from the surface to the depth of the Donovan oil-bearing formation. The project utilizes all available geologic data, including: modern geophysical well logs from three new wells drilled for SECARB's Anthropogenic Test; vintage logs from the Citronelle oilfield wells; porosity and permeability data from whole core and sidewall cores obtained from the injection and observation wells drilled for the Anthropogenic Test; core data obtained from the SECARB Phase II saline aquifer injection test; and regional core data for relevant formations from the Geological Survey of Alabama archives. Cross sections, isopach maps and structure maps were developed to validate the geometry and architecture of the Citronelle Dome for building the model and to ensure that no major structural defects exist in the area. A synthetic neural network approach was used to predict porosity from the available SP and resistivity log data for the storage reservoir formations. These data were validated and applied to extrapolate porosity over the study-area wells and to interpolate permeability among these data points. Geostatistical assessments were conducted over the study area. In addition to geologic characterization of the region, a suite of core analyses was conducted to construct a depositional model and constrain caprock integrity. Petrographic assessment of core was conducted by OSU and analyzed to build a depositional framework for the region and provide modern-day analogues. Caprock stability over several test parameters was assessed by UAB to yield comprehensive measurements of the long-term stability of caprocks. The detailed geologic model of the full earth volume, from the surface through the Donovan oil reservoir, is incorporated into a state-of-the-art reservoir simulation conducted by the University of Alabama at Birmingham (UAB) to explore optimization of CO2 injection and storage under different characterizations of reservoir flow properties. The application of scaled-up geologic modeling and reservoir simulation provides a proof of concept for large-scale volumetric modeling of CO2 injection and storage in the subsurface.
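The porosity-prediction step can be illustrated with a small neural-network regression. This is a minimal sketch under invented assumptions, not the project's actual workflow; the synthetic SP/resistivity values and the porosity relation are stand-ins:

```python
# Hedged sketch: predict porosity from SP and resistivity logs with a small
# neural network, then apply the fit where core data are missing.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
sp = rng.uniform(-80, 0, 500)            # spontaneous potential, mV (synthetic)
res = rng.uniform(1, 100, 500)           # resistivity, ohm-m (synthetic)
porosity = 0.25 - 0.001 * sp - 0.0005 * res + rng.normal(0, 0.01, 500)

X = np.column_stack([sp, res])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                                   random_state=0))
model.fit(X, porosity)
print(model.predict(X[:3]))              # in practice, applied to uncored wells
```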
NASA Astrophysics Data System (ADS)
Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan
2014-08-01
The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted and back-illuminated, with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s; this is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE on the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were to obtain the photon transfer curve (PTC), the charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics and quantum efficiency. The X-ray images were also used for the analysis of charge diffusion for different substrate voltages (VSUB). Regarding cosmetics, and in addition to white and dark pixels, some patterns were also found. The first one, which appears in all devices, is the presence of half circles on the external edges. The origin of this pattern may be related to the assembly process. A second one appears in the dark images and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electrical fields. Finally, and in just two devices, there is a spot with wavelength dependence whose origin could be the result of a defective coating process.
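Of the tests listed, the photon transfer curve lends itself to a compact sketch: for pairs of flats at increasing illumination, the slope of shot-noise variance versus mean signal is 1/gain. A minimal example with synthetic flats, not the PAU calibration code:

```python
# Hedged PTC sketch: gain (e-/ADU) from the variance-vs-mean slope of flat pairs.
import numpy as np

rng = np.random.default_rng(0)

def ptc_point(flat_a, flat_b):
    """Mean signal and shot-noise variance from a bias-subtracted flat pair."""
    mean = 0.5 * (flat_a.mean() + flat_b.mean())
    var = np.var(flat_a - flat_b) / 2.0   # differencing cancels fixed-pattern noise
    return mean, var

# Synthetic flat pairs at several illumination levels (signal in electrons).
flat_pairs = [(rng.poisson(lvl, (200, 200)).astype(float),
               rng.poisson(lvl, (200, 200)).astype(float))
              for lvl in (500, 2000, 8000, 20000)]
means, variances = zip(*(ptc_point(a, b) for a, b in flat_pairs))
slope = np.polyfit(means, variances, 1)[0]    # PTC slope = 1/gain
print(f"gain ~ {1.0 / slope:.2f} e-/ADU")     # ~1.0 for these synthetic data
```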
USDA-ARS?s Scientific Manuscript database
Leafy spurge (Euphorbia esula) is an invasive weed of North America and its perennial nature is attributed to underground adventitious buds (UABs) that undergo seasonal cycles of para-, endo- and eco-dormancy. Recommended field rates of glyphosate (~1 kg/ha) destroys above-ground shoots of leafy spu...
Testing Bonner sphere spectrometers in the JRC-IRMM mono-energetic neutron beams
NASA Astrophysics Data System (ADS)
Bedogni, R.; Domingo, C.; Esposito, A.; Chiti, M.; García-Fusté, M. J.; Lovestam, G.
2010-08-01
Within the framework of the Euratom Transnational Access programme, a specific sub-programme, called NUDAME (neutron data measurements at IRMM), was dedicated to support neutron measurement experiments at the accelerator-based facilities of the JRC-IRMM Geel, Belgium. In this context, the INFN-LNF and UAB groups undertook two separate experiments at the 7 MV Van de Graaff facility, aimed at testing their Bonner sphere spectrometers (BSS) with mono-energetic neutron beams. Both research groups routinely employ the BSS in neutron spectra measurements for radiation protection dosimetry purposes, where accurate knowledge of the BSS response is a mandatory condition for correct dose evaluations. This paper presents the results obtained by both groups, focusing on: (1) the comparison between the value of neutron fluence provided as reference data and that obtained by applying the FRUIT unfolding code to the measured BSS data and (2) the experimental validation of the response matrices of the BSSs, previously derived with Monte Carlo simulations.
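The unfolding step can be sketched generically: given a response matrix R (counts per unit fluence, spheres x energy bins) and measured counts c, recover a non-negative spectrum phi with R @ phi ≈ c. The matrix and spectrum below are synthetic stand-ins; the paper's actual analysis uses the FRUIT code:

```python
# Hedged sketch of Bonner sphere unfolding via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
R = rng.uniform(0.1, 1.0, size=(7, 20))      # 7 spheres, 20 energy bins (synthetic)
phi_true = np.exp(-0.3 * np.arange(20))      # assumed spectrum shape
counts = R @ phi_true                        # simulated sphere readings
phi_est, residual = nnls(R, counts)          # non-negative spectrum estimate
fluence = phi_est.sum()                      # total fluence, compared with reference
print(f"fluence ~ {fluence:.3f}, residual = {residual:.2e}")
```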
Project definition study for the National Biomedical Tracer Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roozen, K.
The University of Alabama at Birmingham (UAB) has conducted a study of the proposed National Biomedical Tracer Facility (NBTF). In collaboration with General Atomics, RUST International, Coleman Research Corporation (CRC), IsoMed, Ernst and Young and the advisory committees, UAB examined the issues relevant to the NBTF in terms of facility design, operating philosophy and a business plan. Resources within UAB, CRC and Chem-Nuclear were used to develop recommendations on environmental, safety and health issues. The Institute of Medicine Panel's Report on Isotopes for Medicine and the Life Sciences took the results of prior workshops further in developing recommendations for the mission of the NBTF. The IOM panel recommends that the NBTF accelerator have the capacity to accelerate protons to 80 MeV with a minimum of 750 microamperes of current. The panel declined to recommend a cyclotron or a linac. They emphasized a clear focus on research and development for isotope production, including target design, separation chemistry and generator development. The facility needs to emphasize education and training in its mission. The facility must focus on radionuclide production for the research and clinical communities. The formation of a public-private partnership resembling the TRIUMF-Nordion model was encouraged. An advisory panel should assist with NBTF operations and prioritization.
Jiménez-Xarrié, Elena; Davila, Myriam; Candiota, Ana Paula; Delgado-Mederos, Raquel; Ortega-Martorell, Sandra; Julià-Sapé, Margarida; Arús, Carles; Martí-Fàbregas, Joan
2017-01-13
Magnetic resonance spectroscopy (MRS) provides non-invasive information about the metabolic pattern of the brain parenchyma in vivo. The SpectraClassifier software performs MRS pattern recognition by determining the spectral features (metabolites) which can be used objectively to classify spectra. Our aim was to develop an Infarct Evolution Classifier and a Brain Regions Classifier in a rat model of focal ischemic stroke using SpectraClassifier. A total of 164 single-voxel proton spectra obtained with a 7 Tesla magnet at an echo time of 12 ms from non-infarcted parenchyma, subventricular zones and infarcted parenchyma were analyzed with SpectraClassifier (http://gabrmn.uab.es/?q=sc). The spectra corresponded to Sprague-Dawley rats (healthy rats, n = 7) and stroke rats at day 1 post-stroke (acute phase, n = 6 rats) and at days 7 ± 1 post-stroke (subacute phase, n = 14). In the Infarct Evolution Classifier, spectral features contributed by lactate + mobile lipids (1.33 ppm), total creatine (3.05 ppm) and mobile lipids (0.85 ppm) distinguished among non-infarcted parenchyma (100% sensitivity and 100% specificity), acute phase of infarct (100% sensitivity and 95% specificity) and subacute phase of infarct (78% sensitivity and 100% specificity). In the Brain Regions Classifier, spectral features contributed by myoinositol (3.62 ppm) and total creatine (3.04/3.05 ppm) distinguished among infarcted parenchyma (100% sensitivity and 98% specificity), non-infarcted parenchyma (84% sensitivity and 84% specificity) and subventricular zones (76% sensitivity and 93% specificity). SpectraClassifier identified candidate biomarkers for infarct evolution (mobile lipids accumulation) and different brain regions (myoinositol content).
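The per-class sensitivity and specificity figures quoted above are one-vs-rest readings of a confusion matrix. A small sketch with illustrative counts (not the study's data):

```python
# Hedged sketch: one-vs-rest sensitivity/specificity from a confusion matrix.
import numpy as np

def sens_spec(conf, k):
    """Sensitivity and specificity for class k (rows: true, cols: predicted)."""
    tp = conf[k, k]
    fn = conf[k].sum() - tp
    fp = conf[:, k].sum() - tp
    tn = conf.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

conf = np.array([[20, 0, 0],    # illustrative counts only
                 [1, 19, 0],
                 [0, 2, 7]])
for k, name in enumerate(["non-infarcted", "acute", "subacute"]):
    se, sp = sens_spec(conf, k)
    print(f"{name}: sensitivity {se:.0%}, specificity {sp:.0%}")
```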
TrSDB: a proteome database of transcription factors
Hermoso, Antoni; Aguilar, Daniel; Aviles, Francesc X.; Querol, Enrique
2004-01-01
TrSDB—TranScout Database—(http://ibb.uab.es/trsdb) is a proteome database of eukaryotic transcription factors based upon motifs predicted by TranScout and data sources such as InterPro and Gene Ontology Annotation. Nine eukaryotic proteomes are included in the current version. Extensive and diverse information for each database entry, as well as different analyses considering TranScout classification and similarity relationships, is offered for research on transcription factors or gene expression. PMID:14681387
Environmental sacredness and health in Palau.
Kuartei, Steven
2005-03-01
The migration from Africa to the Pacific took many millennia under ever-changing environmental conditions: physical, social, spiritual and economic. Through evolutionary change from Neanderthals to Homo sapiens, and through the Stone Age and Ice Age, the journey continued in a sacred milieu that would protect this predestined journey out of the Garden of Eden. On arrival at the final destination, a sacred gift called Uab (Palau), life would be guided by the sacredness of the land, the sea, the skies and the operational structures of a society that would survive the test of time and conditions. This paper examines how such sacredness has been violated and how that has led to the erosion, exploitation and prostitution of the environment, or lukel a klengar (nest of life). It explores what it would take to reclaim some of the sacredness lost. The premise is that the loss of the sacredness of Palau (Chedolel Belau) would mean a society lost.
Manual of Protective Action Guides and Protective Actions for Nuclear Incidents. Revision
1980-06-01
acceptable dose. Since the PAG is based on a projected dose, it is used only in an ex post facto effort to minimize the risk from an event which is occurring...statistical evaluation of epidemiological studies in groups of people who had been exposed to radiation. Decisions concerning statistical effects on...protective actions. The Reactor Safety Study indicates, for example, that major releases may begin in the range of one-half hour to as much as 30 hours after an
Yue, Zongliang; Zheng, Qi; Neylon, Michael T; Yoo, Minjae; Shin, Jimin; Zhao, Zhiying; Tan, Aik Choon
2018-01-01
Integrative Gene-set, Network and Pathway Analysis (GNPA) is a powerful data analysis approach developed to help interpret high-throughput omics data. In PAGER 1.0, we demonstrated that researchers can gain unbiased and reproducible biological insights with the introduction of PAGs (Pathways, Annotated-lists and Gene-signatures) as the basic data representation elements. In PAGER 2.0, we improve the utility of integrative GNPA by significantly expanding the coverage of PAGs and PAG-to-PAG relationships in the database, defining a new metric to quantify PAG data qualities, and developing new software features to simplify online integrative GNPA. Specifically, we included 84,282 PAGs spanning 24 different data sources that cover human diseases, published gene-expression signatures, drug–gene, miRNA–gene interactions, pathways and tissue-specific gene expressions. We introduced a new normalized Cohesion Coefficient (nCoCo) score to assess the biological relevance of genes inside a PAG, and RP-score to rank genes and assign gene-specific weights inside a PAG. The companion web interface contains numerous features to help users query and navigate the database content. The database content can be freely downloaded and is compatible with third-party Gene Set Enrichment Analysis tools. We expect PAGER 2.0 to become a major resource in integrative GNPA. PAGER 2.0 is available at http://discovery.informatics.uab.edu/PAGER/. PMID:29126216
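A minimal sketch of the kind of gene-set (PAG) enrichment test that GNPA tools build on, the hypergeometric overlap test; the counts are illustrative, and PAGER's own scoring (e.g., nCoCo) is more elaborate:

```python
# Hedged sketch: probability of seeing k or more of a user's genes inside a
# PAG by chance, via the hypergeometric survival function.
from scipy.stats import hypergeom

N = 20000   # genes in the background (illustrative)
K = 150     # genes in the PAG
n = 300     # genes in the user's list
k = 12      # overlap between the two
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p = {p_value:.2e}")
```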
Impact of elective versus required medical school research experiences on career outcomes
Weaver, Alice N; McCaw, Tyler R; Fifolt, Matthew; Hites, Lisle; Lorenz, Robin G
2018-01-01
Many US medical schools have added a scholarly or research requirement as a potential intervention to increase the number of medical students choosing to become academic physicians and physician scientists. We designed a retrospective qualitative survey study to evaluate the impact of medical school research at the University of Alabama at Birmingham (UAB) on career choices. A survey tool was developed consisting of 74 possible questions with built-in skip patterns to customize the survey to each participant. The survey was administered using the web-based program Qualtrics to UAB School of Medicine alumni graduating between 2000 and 2014. Alumni were contacted 3 times at 2-week intervals during the year 2015, resulting in 168 completed surveys (11.5% response rate). MD/PhD graduates were excluded from the study. Most respondents completed elective research, typically for reasons relating to career advancement. 24 per cent said medical school research increased their desire for research involvement in the future, a response that positively correlated with mentorship level and publication success. Although completion of medical school research was positively correlated with current research involvement, the strongest predictor for a physician scientist career was pre-existing passion for research (p=0.008). In contrast, students motivated primarily by curricular requirement were less likely to pursue additional research opportunities. Positive medical school research experiences were associated with increased postgraduate research in our study. However, we also identified a strong relationship between current research activity and passion for research, which may predate medical school. PMID:28270407
Effect of endodontic irrigants on biofilm matrix polysaccharides.
Tawakoli, P N; Ragnarsson, K T; Rechenberg, D K; Mohn, D; Zehnder, M
2017-02-01
To specifically investigate the effect of endodontic irrigants at their clinical concentration on matrix polysaccharides of cultured biofilms. Saccharolytic effects of 3% H2O2, 2% chlorhexidine (CHX), 17% EDTA, 5% NaOCl and 0.9% saline (control) were tested using agarose (α1-3 and β1-4 glycosidic bonds) blocks (n = 3) in a weight assay. The irrigants were also applied to three-species biofilms (Streptococcus mutans UAB 159, Streptococcus oralis OMZ 607 and Actinomyces oris OMZ 745) grown anaerobically on hydroxyapatite discs (n = 6). Glycoconjugates in the matrix and total bacterial cell volumes were determined using combined Concanavalin A-/Syto 59-staining and confocal laser-scanning microscopy. Volumes of each scanned area (triplicates/sample) were calculated using Imaris software. Data were compared between groups using one-way ANOVA/Tukey HSD, α = 0.05. The weight assay revealed that NaOCl was the only irrigant under investigation capable of dissolving the agarose blocks. NaOCl eradicated stainable matrix and bacteria in cultured biofilms after 1 min of exposure (P < 0.05 compared to all groups; volumes in means ± standard deviation, 10^-3 mm^3 per 0.6 mm^2 disc; NaOCl matrix: 0.10 ± 0.08, bacteria: 0.03 ± 0.06; saline control matrix: 4.01 ± 1.14, bacteria: 11.56 ± 3.02). EDTA also appeared to have some effect on the biofilm matrix (EDTA matrix: 1.90 ± 0.33, bacteria: 9.26 ± 2.21), whilst H2O2 and CHX merely reduced bacterial cell volumes. Sodium hypochlorite can break glycosidic bonds. It dissolves glycoconjugates in the biofilm matrix. It also lyses bacterial cells. © 2015 International Endodontic Journal. Published by John Wiley & Sons Ltd.
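The group comparison described (one-way ANOVA followed by Tukey HSD at α = 0.05) looks like this in Python; the volume data below are placeholders, not the study's measurements:

```python
# Hedged sketch: omnibus ANOVA across irrigant groups, then pairwise Tukey HSD.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "NaOCl":  np.array([0.05, 0.12, 0.08, 0.15, 0.09, 0.11]),   # placeholder volumes
    "EDTA":   np.array([1.6, 2.1, 1.9, 2.2, 1.7, 1.9]),
    "saline": np.array([3.5, 4.2, 3.9, 4.6, 3.8, 4.1]),
}
print(f_oneway(*groups.values()))            # overall difference across groups
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```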
PopHuman: the human population genomics browser.
Casillas, Sònia; Mulet, Roger; Villegas-Mirón, Pablo; Hervas, Sergi; Sanz, Esteve; Velasco, Daniel; Bertranpetit, Jaume; Laayouni, Hafid; Barbadilla, Antonio
2018-01-04
The 1000 Genomes Project (1000GP) represents the most comprehensive world-wide nucleotide variation data set so far in humans, providing the sequencing and analysis of 2504 genomes from 26 populations and reporting >84 million variants. The availability of this sequence data provides the human lineage with an invaluable resource for population genomics studies, allowing the testing of molecular population genetics hypotheses and eventually the understanding of the evolutionary dynamics of genetic variation in human populations. Here we present PopHuman, a new population genomics-oriented genome browser based on JBrowse that allows the interactive visualization and retrieval of an extensive inventory of population genetics metrics. Efficient and reliable parameter estimates have been computed using a novel pipeline that faces the unique features and limitations of the 1000GP data, and include a battery of nucleotide variation measures, divergence and linkage disequilibrium parameters, as well as different tests of neutrality, estimated in non-overlapping windows along the chromosomes and in annotated genes for all 26 populations of the 1000GP. PopHuman is open and freely available at http://pophuman.uab.cat. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Technical Reports Server (NTRS)
Al-Hamdan, Mohammad; Crosson, William; Economou, Sigrid; Estes, Maurice, Jr.; Estes, Sue; Hemmings, Sarah; Kent, Shia; Quattrochi, Dale; Wade, Gina; McClure, Leslie
2011-01-01
NASA Marshall Space Flight Center is collaborating with the University of Alabama at Birmingham (UAB) School of Public Health and the Centers for Disease Control and Prevention (CDC) National Center for Public Health Informatics to address issues of environmental health and enhance public health decision making by utilizing NASA remotely sensed data and products. The objectives of this study are to develop high-quality spatial data sets of environmental variables, link these with public health data from a national cohort study, and deliver the linked data sets and associated analyses to local, state and federal end-user groups. Three daily environmental data sets will be developed for the conterminous U.S. on different spatial resolutions for the period 2003-2008: (1) spatial surfaces of estimated fine particulate matter (PM2.5) exposures on a 10-km grid utilizing the US Environmental Protection Agency (EPA) ground observations and NASA's MODerate-resolution Imaging Spectroradiometer (MODIS) data; (2) a 1-km grid of Land Surface Temperature (LST) using MODIS data; and (3) a 12-km grid of daily Solar Insolation (SI) using the North American Land Data Assimilation System (NLDAS) forcing data. These environmental data sets will be linked with public health data from the UAB REasons for Geographic And Racial Differences in Stroke (REGARDS) national cohort study to determine whether exposures to these environmental risk factors are related to cognitive decline and other health outcomes. These environmental datasets and public health linkage analyses will be disseminated to end-users for decision making through the CDC Wide-ranging Online Data for Epidemiologic Research (WONDER) system.
Measuring Resident Physicians' Performance of Preventive Care
Palonen, Katri P; Allison, Jeroan J; Heudebert, Gustavo R; Willett, Lisa L; Kiefe, Catarina I; Wall, Terry C; Houston, Thomas K
2006-01-01
BACKGROUND The Accreditation Council for Graduate Medical Education has suggested various methods for evaluation of practice-based learning and improvement competency, but data on implementation of these methods are limited. OBJECTIVE To compare medical record review and patient surveys on evaluating physician performance in preventive services in an outpatient resident clinic. DESIGN Within an ongoing quality improvement project, we collected baseline performance data on preventive services provided for patients at the University of Alabama at Birmingham (UAB) Internal Medicine Residents' ambulatory clinic. PARTICIPANTS Seventy internal medicine and medicine-pediatrics residents from the UAB Internal Medicine Residency program. MEASUREMENTS Resident- and clinic-level comparisons of aggregated patient survey and chart documentation rates of (1) screening for smoking status, (2) advising smokers to quit, (3) cholesterol screening, (4) mammography screening, and (5) pneumonia vaccination. RESULTS Six hundred and fifty-nine patient surveys and 761 charts were abstracted. At the clinic level, rates for screening of smoking status, recommending mammogram, and for cholesterol screening were similar (difference <5%) between the 2 methods. Higher rates for pneumonia vaccination (76% vs 67%) and advice to quit smoking (66% vs 52%) were seen on medical record review versus patient surveys. However, within-resident (N=70) comparison of 2 methods of estimating screening rates contained significant variability. The cost of medical record review was substantially higher ($107 vs $17/physician). CONCLUSIONS Medical record review and patient surveys provided similar rates for selected preventive health measures at the clinic level, with the exception of pneumonia vaccination and advising to quit smoking. A large variation among individual resident providers was noted. PMID:16499544
Cofer, Kevin D; Hollis, Robert H; Goss, Lauren; Morris, Melanie S; Porterfield, John R; Chu, Daniel I
2018-02-23
To evaluate whether burnout was associated with emotional intelligence and job performance in surgical residents. General surgery residents at a single institution were surveyed using the Maslach Burnout Inventory (MBI) and trait EI questionnaire (TEIQ-SF). Burnout was defined as scoring in 2 of the following 3 domains: Emotional Exhaustion (high), Depersonalization (high), and Personal Accomplishment (low). Job performance was evaluated using faculty evaluations of clinical competency-based surgical milestones and standardized test scores including the American Board of Surgery In-Training Exam (ABSITE) and the United States Medical Licensing Examination (USMLE) Step 3. USMLE Step 1 and USMLE Step 2, which were taken prior to residency training, were included to examine possible associations of burnout with USMLE examinations. Statistical comparison was made using Pearson correlation and simple linear regression adjusting for PGY level. This study was conducted at the University of Alabama at Birmingham (UAB) general surgery residency program. All current and incoming general surgery residents at UAB were invited to participate in this study. Forty residents participated in the survey (response rate 77%). Ten residents, evenly distributed from incoming residents to PGY-4, had burnout (25%). Mean global EI was lower in residents with burnout versus those without burnout (3.71 vs 3.9, p = 0.02). Of the 4 facets of EI, mean self-control values were lower in residents with burnout versus those without burnout (3.3 vs 4.06, p < 0.01). Each component of burnout was associated with global EI, with the strongest correlation being with personal accomplishment (r = 0.64; p < 0.01). Residents with burnout did not have significantly different mean scores for USMLE Step 1 (229 vs 237, p = 0.12), Step 2 (248 vs 251, p = 0.56), Step 3 (223 vs 222, p = 0.97), or ABSITE percentile (44.6 vs 58, p = 0.33) compared to residents without burnout. Personal accomplishment was associated with ABSITE percentile scores (r = 0.35; p = 0.049). None of the 16 surgical milestone scores were significantly associated with burnout. Burnout is present in surgery residents and associated with emotional intelligence. There was no association of burnout with USMLE scores, ABSITE percentile, or surgical milestones. Traditional methods of assessing resident performance may not be capturing burnout, and strategies to reduce burnout should consider targeting emotional intelligence. Published by Elsevier Inc.
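The analysis strategy, simple linear regression adjusting for PGY level, can be sketched as follows; the data frame and its values are hypothetical:

```python
# Hedged sketch: regress an outcome (e.g., ABSITE percentile) on an EI facet,
# adjusting for PGY level as a categorical covariate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({   # hypothetical values, not the study's data
    "absite_pct": [44, 58, 61, 39, 72, 55, 48, 66],
    "personal_accomplishment": [3.1, 4.0, 4.2, 2.9, 4.5, 3.8, 3.3, 4.1],
    "pgy": [1, 1, 2, 2, 3, 3, 4, 4],
})
fit = smf.ols("absite_pct ~ personal_accomplishment + C(pgy)", data=df).fit()
print(fit.summary().tables[1])   # coefficient table with adjusted association
```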
Boggiano, M.M.; Burgess, E.E.; Turan, B.; Soleymani, T.; Daniel, S.; Vinson, L.D.; Lokken, K.L.; Wingo, B.C.; Morse, A.
2016-01-01
The aim of this study was to use the Palatable Eating Motives Scale (PEMS) to determine if and what motives for eating tasty foods (e.g., junk food, fast food, and desserts) are associated with binge-eating in two diverse populations. BMI and scores on the PEMS, Yale Food Addiction Scale (YFAS), and Binge-eating Scale (BES) were obtained from 247 undergraduates at the University of Alabama at Birmingham (UAB) and 249 weight-loss seeking patients at the UAB EatRight program. Regression analyses revealed that eating tasty foods to forget worries and problems and help alleviate negative feelings (i.e., the 4-item Coping motive) was associated with binge-eating independently of any variance in BES scores due to sex, age, ethnicity, BMI, other PEMS motives, and YFAS scores in both students (R2 = .57) and patients (R2 = .55). Coping also was associated with higher BMI in students (p < 0.01), and in patients despite their truncated BMI range (p < 0.05). Among students, the motives Conformity and Reward Enhancement were also independently associated with binge-eating. For this younger sample with a greater range of BES scores, eating for these motives, but not for Social ones, may indicate early maladaptive eating habits that could later develop into disorders characterized by binge-eating if predisposing factors are present. Thus, identifying one’s tasty food motive or motives can potentially be used to thwart the development of BED and obesity, especially if the motive is Coping. Identifying one’s PEMS motives should also help personalize conventional treatments for binge-eating and obesity toward improved outcomes. PMID:25169880
Weaver, Alice N; Burch, M Benjamin; Cooper, Tiffiny S; Della Manna, Deborah L; Wei, Shi; Ojesina, Akinyemi I; Rosenthal, Eben L; Yang, Eddy S
2016-09-01
Oral squamous cell carcinoma (OSCC) is a cancer subtype that lacks validated prognostic and therapeutic biomarkers, and human papillomavirus status has not proven beneficial in predicting patient outcomes. A gene expression pathway analysis was conducted using OSCC patient specimens to identify molecular targets that may improve management of this disease. RNA was isolated from 19 OSCCs treated surgically at the University of Alabama at Birmingham (UAB; Birmingham, AL) and evaluated using the NanoString nCounter system. Results were confirmed using the oral cavity subdivision of the Head and Neck Squamous Cell Carcinoma Cancer (HNSCC) study generated by The Cancer Genome Atlas (TCGA) Research Network. Further characterization of the in vitro phenotype produced by Notch pathway activation in HNSCC cell lines included gene expression, proliferation, cell cycle, migration, invasion, and radiosensitivity. In both UAB and TCGA samples, Notch pathway upregulation was significantly correlated with patient mortality status and with expression of the proinvasive gene FGF1 In vitro Notch activation in HNSCC cells increased transcription of FGF1 and induced a marked increase in cell migration and invasion, which was fully abrogated by FGF1 knockdown. These results reveal that increased Notch pathway signaling plays a role in cancer progression and patient outcomes in OSCC. Accordingly, the Notch-FGF interaction should be further studied as a prognostic biomarker and potential therapeutic target for OSCC. Patients with squamous cell carcinoma of the oral cavity who succumb to their disease are more likely to have upregulated Notch signaling, which may mediate a more invasive phenotype through increased FGF1 transcription. Mol Cancer Res; 14(9); 883-91. ©2016 AACR. ©2016 American Association for Cancer Research.
Faulkner, C B; Simecka, J W; Davidson, M K; Davis, J K; Schoeb, T R; Lindsey, J R; Everson, M P
1995-01-01
Studies were conducted to determine whether the production of various cytokines is associated with Mycoplasma pulmonis disease expression. Susceptible C3H/HeN and resistant C57BL/6N mice were inoculated intranasally with 10^7 CFU of virulent M. pulmonis UAB CT or avirulent M. pulmonis UAB T. Expression of genes for tumor necrosis factor alpha (TNF-alpha), interleukin 1 alpha (IL-1 alpha), IL-1 beta, IL-6, and gamma interferon (IFN-gamma) in whole lung tissue and TNF-alpha gene expression in bronchoalveolar lavage (BAL) cells was determined by reverse transcription-PCR using specific cytokine primers at various times postinoculation. In addition, concentrations of TNF-alpha, IL-1, IL-6, and IFN-gamma were determined in BAL fluid and serum samples at various times postinoculation. Our results showed that there was a sequential appearance of cytokines in the lungs of infected mice: TNF-alpha, produced primarily by BAL cells, appeared first, followed by IL-1 and IL-6, which were followed by IFN-gamma. Susceptible C3H/HeN mice had higher and more persistent concentrations of TNF-alpha and IL-6 in BAL fluid than did resistant C57BL/6N mice, indicating that TNF-alpha and possibly IL-6 are important factors in pathogenesis of acute M. pulmonis disease in mice. Serum concentrations of IL-6 were elevated in C3H/HeN mice, but not C57BL/6N mice, following infection with M. pulmonis, suggesting that IL-6 has both local and systemic effects in M. pulmonis disease. PMID:7558323
Satellite-based products for forest fire prevention and recovery: the PREFER experience
NASA Astrophysics Data System (ADS)
Laneve, Giovanni; Bernini, Guido; Fusilli, Lorenzo; Marzialetti, Pablo
2016-08-01
PREFER is a three-year project funded in 2012 in the framework of the FP7 Emergency call. The project objective was to set up a space-based service infrastructure and up-to-date cartographic products, based on remote sensing data, to support the preparedness, prevention, recovery and reconstruction phases of the forest fire emergency cycle in the European Mediterranean Region. The products of PREFER were tested and evaluated during the training and demonstration period of the project, which coincided with the forest fire season of 2015. The products were tested using the online PREFER service, and the tests were linked to the pilot areas of the project: Minho (Portugal), Messenia (Greece), Andalucía (Spain), Sardinia (Italy) and Corse (France). Testing was performed by members of the User Advisory Board (UAB), starting from the training event organized in Coimbra, Portugal in June 2015. The tests continued until the end of the fire season (October 2015), and the end users were provided with updated information for the areas of interest during the entire demonstration period. Due to data availability restrictions (in particular for required ancillary data), not all products were available for testing in all the test areas. However, all the PREFER products were tested in at least one pilot area and in cooperation with at least one end-user organization. Beyond product suitability and usefulness to the end users, the tests also evaluated the usability of the web-based PREFER service and the quality of service provided. This paper presents the results of the demonstration activity, the lessons learned and ideas for further enhancement of the products developed to support the prevention and recovery phases of the wildfire cycle.
NASA Astrophysics Data System (ADS)
Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong
2015-09-01
This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
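The local DCT evaluation at the heart of the method can be sketched in isolation: compare a test image with each training image by cosine similarity between low-frequency DCT coefficients. Random arrays stand in for face images, and the progressive subject elimination and weighted sparse representation steps are omitted:

```python
# Hedged sketch: cosine similarity between truncated 2-D DCT feature vectors.
import numpy as np
from scipy.fft import dct

def dct_features(img, k=8):
    """2-D DCT of an image, keeping the low-frequency top-left k x k block."""
    coeffs = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
train = [rng.random((32, 32)) for _ in range(10)]   # stand-ins for face images
test = rng.random((32, 32))
scores = [cosine_similarity(dct_features(test), dct_features(t)) for t in train]
neighbors = np.argsort(scores)[::-1][:3]            # "nearest neighbors" in DCT domain
print(neighbors)
```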
Testing Scientific Software: A Systematic Literature Review.
Kanewala, Upulee; Bieman, James M
2014-10-01
Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
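One concrete way to attack the oracle problem the review highlights is metamorphic testing: check a relation between outputs rather than an exact expected value. A toy sketch for a hypothetical simulate() function whose result should not depend on input order:

```python
# Hedged sketch: a metamorphic test asserting permutation invariance when no
# exact oracle is available. simulate() is a stand-in scientific computation.
import numpy as np

def simulate(samples):
    """Stand-in computation: mean 'kinetic energy' of the samples."""
    return 0.5 * np.mean(np.asarray(samples) ** 2)

def test_permutation_invariance():
    rng = np.random.default_rng(42)
    samples = rng.normal(size=1000)
    original = simulate(samples)
    shuffled = simulate(rng.permutation(samples))
    assert np.isclose(original, shuffled), "output should not depend on input order"

test_permutation_invariance()
```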
NASA Technical Reports Server (NTRS)
Hartley, Garen
2018-01-01
NASA's vision for humans pursuing deep space flight involves the collection of science in low Earth orbit aboard the International Space Station (ISS). As a service to the science community, Johnson Space Center (JSC) has developed hardware and processes to preserve collected science on the ISS and transfer it safely back to the Principal Investigators. This hardware includes an array of freezers, refrigerators and incubators. The Cold Stowage team is part of the ISS program; JSC manages the operation, support and integration tasks provided by Jacobs Technology and the University of Alabama at Birmingham (UAB). Cold Stowage provides controlled environments to meet temperature requirements during ascent, on-orbit operations and return for ISS payload science.
A Framework of the Use of Information in Software Testing
ERIC Educational Resources Information Center
Kaveh, Payman
2010-01-01
With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…
The impact of roofing material on building energy performance
NASA Astrophysics Data System (ADS)
Badiee, Ali
The last decade has seen an increase in the efficient use of energy sources such as water, electricity and natural gas, as well as a variety of roofing materials, in the heating and cooling of both residential and commercial infrastructure. Oil, coal and natural gas prices remain high and unstable. These instabilities and increased costs have resulted in higher heating and cooling costs, and engineers are making an effort to keep them under control by using energy-efficient building materials. The building envelope (that which separates the indoor and outdoor environments of a building) plays a significant role in the rate of building energy consumption. An appropriate architectural design of a building envelope can considerably lower the energy consumption during hot summers and cold winters, resulting in reduced HVAC loads. Several building components (walls, roofs, fenestration, foundations, thermal insulation, external shading devices, thermal mass, etc.) make up this essential part of a building. However, thermal insulation of a building's rooftop is the most essential part of a building envelope in that it reduces the incoming "heat flux" (defined as the amount of heat transferred per unit area per unit time from or to a surface) (Sadineni et al., 2011). Moreover, more than 60% of heat transfer occurs through the roof regardless of weather, since the roof is often the building surface that receives the largest amount of solar radiation per unit area annually (Suman and Srivastava, 2009). Hence, an argument can be made that the emphasis on building energy efficiency has influenced roofing manufacturing more than any other building envelope component. This research project addresses roofing energy performance as the source of nearly 60% of building heat transfer (Suman and Srivastava, 2009) and ranks different roofing materials in terms of their energy performance. Other parts of the building envelope, such as walls, foundation and fenestration, and the energy-performance value of their thermal insulation, are not included in this study. Five UAB campus buildings with the same reinforced concrete structure (RC structure), each having a different roofing material, were selected, surveyed, analyzed and evaluated in this study. Two primary factors are considered in this evaluation: energy consumption and utility bills. The data were provided by the UAB Facilities Management Department, monitored from 2007 to 2013, and analyzed using analysis of variance (ANOVA) and t-test methods. The energy utilities examined in this study were electricity, domestic water and natural gas, measured separately in four different seasons over a seven-year period. The building roofing materials consisted of a green roof, a white (reflective) roof, a river rock roof, a concrete paver roof and a traditional black roof. Results from this study indicate that the white roof is the most energy-efficient roofing material.
Executable assertions and flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
Executable assertions are used to test flight control software. The techniques used for testing flight software, however, differ from the techniques used to test other kinds of software because of the redundant nature of flight software. An experimental setup for testing flight software using executable assertions is described. Techniques for writing and using executable assertions to test flight software are presented. The error detection capability of assertions is studied and many examples of assertions are given. The issues of placement and complexity of assertions and the language features to support efficient use of assertions are discussed.
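The flavor of such assertions can be illustrated with a small sketch; the range check, the variable names, and the choice to log violations rather than abort (so the redundant software keeps running while the assertion gathers evidence) are illustrative assumptions, not code from the paper.

    # Minimal sketch of an executable assertion for flight-style code.
    def assert_in_range(name, value, lo, hi, log):
        """Record a range violation instead of aborting, so the software
        under test keeps executing while the assertion collects evidence."""
        if not (lo <= value <= hi):
            log.append(f"ASSERTION FAILED: {name}={value} outside [{lo}, {hi}]")

    log = []
    pitch_rate = 7.3  # deg/s -- hypothetical sensor reading for illustration
    assert_in_range("pitch_rate", pitch_rate, -5.0, 5.0, log)
    print(log)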
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven
2013-01-01
This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
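dtest's actual configuration format is not reproduced here; the following is a minimal sketch of the scan-and-execute pattern described above, with the test.cfg file name and its one-command-per-line format invented for illustration (dtest additionally distributes tests over CPU cores, which this sequential sketch omits).

    import os
    import subprocess

    def find_test_dirs(root, marker="test.cfg"):
        """Recursively collect directories that contain a test config file."""
        return [d for d, _subdirs, files in os.walk(root) if marker in files]

    def run_tests(root):
        """Execute each command listed in every discovered config file."""
        results = {}
        for test_dir in find_test_dirs(root):
            with open(os.path.join(test_dir, "test.cfg")) as f:
                commands = [line.strip() for line in f if line.strip()]
            for cmd in commands:
                proc = subprocess.run(cmd, shell=True, cwd=test_dir)
                results[(test_dir, cmd)] = (proc.returncode == 0)
        return results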
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
Wear, strength, modulus and hardness of CAD/CAM restorative materials.
Lawson, Nathaniel C; Bansal, Ritika; Burgess, John O
2016-11-01
To measure the mechanical properties of several CAD/CAM materials, including lithium disilicate (e.max CAD), lithium silicate/zirconia (Celtra Duo), 3 resin composites (Cerasmart, Lava Ultimate, Paradigm MZ100), and a polymer infiltrated ceramic (Enamic). CAD/CAM blocks were sectioned into 2.5mm×2.5mm×16mm bars for flexural strength and elastic modulus testing and 4mm thick blocks for hardness and wear testing. E.max CAD and half the Celtra Duo specimens were treated in a furnace. Flexural strength specimens (n=10) were tested in a three-point bending fixture. Vickers microhardness (n=2, 5 readings per specimen) was measured with a 1kg load and 15s dwell time. The CAD/CAM materials as well as labial surfaces of human incisors were mounted in the UAB wear device. Cusps of human premolars were mounted as antagonists. Specimens were tested for 400,000 cycles at 20N force, 2mm sliding distance, 1Hz frequency, 24°C, and 33% glycerin lubrication. Volumetric wear and opposing enamel wear were measured with non-contact profilometry. Data were analyzed with 1-way ANOVA and Tukey post-hoc analysis (alpha=0.05). Specimens were observed with SEM. Properties differed for each material (p<0.01). E.max CAD and Celtra Duo were generally stronger, stiffer, and harder than the other materials. E.max CAD, Celtra Duo, Enamic, and enamel demonstrated signs of abrasive wear, whereas Cerasmart, Lava Ultimate, and Paradigm MZ100 demonstrated signs of fatigue. Resin composite and resin-infiltrated ceramic materials have demonstrated adequate wear resistance for load-bearing restorations; however, they will require at least the same material thickness as lithium disilicate restorations because of their lower strength. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Simulation Testing of Embedded Flight Software
NASA Technical Reports Server (NTRS)
Shahabuddin, Mohammad; Reinholtz, William
2004-01-01
Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution time on a target CPU. VRT includes high-resolution operating-system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
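A minimal sketch of the scale-factor idea, assuming a single constant speed ratio between the workstation and the target CPU; the value 8.0 and the function names are illustrative assumptions, not published VRT parameters.

    import time

    # Assumed speed ratio: the target CPU is taken to be 8x slower than
    # the workstation. This constant is an illustrative assumption only.
    SCALE_FACTOR = 8.0

    def workstation_to_target(seconds):
        """Estimate target-CPU execution time from a workstation measurement."""
        return seconds * SCALE_FACTOR

    def timed_call(fn, *args):
        """Run fn and report its estimated execution time on the target CPU."""
        t0 = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - t0
        return result, workstation_to_target(elapsed)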
Modular Rocket Engine Control Software (MRECS)
NASA Technical Reports Server (NTRS)
Tarrant, C.; Crook, J.
1998-01-01
The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.
Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects
ERIC Educational Resources Information Center
Buffardi, Kevin John
2014-01-01
Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…
Educational Software Acquisition for Microcomputers.
ERIC Educational Resources Information Center
Erikson, Warren; Turban, Efraim
1985-01-01
Examination of issues involved in acquiring appropriate microcomputer software for higher education focuses on the following points: developing your own software; finding commercially available software; using published evaluations; pre-purchase testing; customizing and adapting commercial software; post-purchase testing; and software use. A…
Factors That Affect Software Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.
1991-01-01
Software faults that only infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability; a piece of software that is not likely to reveal faults within itself during testing is said to have low testability. It is preferable to design software with higher testability from the outset, i.e., to create software with as high a degree of testability as possible, to avoid the problems of undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution and increases the likelihood that a fault will remain undetected. In this paper, I identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur, in order to decrease the likelihood that faults will remain undetected during testing.
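In the spirit of this analysis (though not the actual technique of [Voas91b]), the likelihood that a fault reveals itself can be estimated empirically by seeding a fault and sampling random inputs; the function names, input distribution, and seeded fault below are all illustrative assumptions.

    import random

    def estimate_testability(program, faulty_program, trials=10_000,
                             input_gen=lambda: random.randint(-1000, 1000)):
        """Fraction of random inputs on which a seeded fault changes the
        output. A low score signals information loss: the fault executes
        but rarely propagates, so testing is unlikely to reveal it."""
        revealed = 0
        for _ in range(trials):
            x = input_gen()
            if program(x) != faulty_program(x):
                revealed += 1
        return revealed / trials

    # Integer division discards information, masking this off-by-one
    # fault on roughly 90% of inputs.
    correct = lambda x: (x // 10) * 2
    faulty = lambda x: ((x + 1) // 10) * 2
    print("estimated testability:", estimate_testability(correct, faulty))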
A methodology for testing fault-tolerant software
NASA Technical Reports Server (NTRS)
Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.
1985-01-01
A methodology for testing fault-tolerant software is presented. There are problems associated with testing fault-tolerant software because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software, namely, to move the major emphasis on testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
NASA Astrophysics Data System (ADS)
Shao, Hongbing
Software testing of scientific software systems often suffers from the test oracle problem, i.e., a lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a framework for testing scientific software systems and evaluated it using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo oracle for simulating light scattering by a homogeneous sphere scatterer; comparable results were obtained between ADDA and the CMMIE code, validating ADDA for homogeneous sphere scatterers. I then used experimental results obtained for light scattering by a homogeneous sphere: ADDA produced simulations comparable to the experimentally measured results, further validating its use for sphere scatterers. Finally, I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and compositions (homogeneous and non-homogeneous); ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
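As a hedged illustration of one such metamorphic relation: a homogeneous sphere is rotationally symmetric, so its simulated scattering should not depend on orientation. The simulate wrapper and its signature are assumptions for this sketch, not ADDA's actual interface.

    def check_orientation_invariance(simulate, sphere, angles, tol=1e-8):
        """Metamorphic test: rotating a homogeneous sphere must leave the
        simulated scattering result unchanged (spherical symmetry)."""
        reference = simulate(sphere, orientation=0.0)
        for angle in angles:
            follow_up = simulate(sphere, orientation=angle)
            assert abs(follow_up - reference) <= tol, (
                f"metamorphic relation violated at rotation {angle}")

    # Stand-in solver for demonstration: a sphere's cross-section is
    # orientation-independent by construction.
    fake_simulate = lambda sphere, orientation: 3.14159 * sphere["radius"] ** 2
    check_orientation_invariance(fake_simulate, {"radius": 0.5},
                                 angles=[10.0, 45.0, 90.0])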
MATTS- A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and the demand for serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming, activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g., state machines), generate abstract test cases, and then convert them into concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to an on-board software system. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
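A minimal sketch of the abstract-test-generation step, assuming the formal model is a plain finite state machine; the spacecraft mode names and events are invented for illustration.

    from collections import deque

    def generate_abstract_tests(transitions, initial, max_depth=3):
        """Enumerate event sequences (abstract test cases) breadth-first
        from an FSM given as {state: [(event, next_state), ...]}."""
        tests, queue = [], deque([(initial, [])])
        while queue:
            state, path = queue.popleft()
            if path:
                tests.append(path)
            if len(path) < max_depth:
                for event, nxt in transitions.get(state, []):
                    queue.append((nxt, path + [(state, event, nxt)]))
        return tests

    # Hypothetical on-board mode manager, purely for illustration.
    fsm = {"SAFE": [("arm", "STANDBY")],
           "STANDBY": [("run", "OPERATE"), ("fault", "SAFE")],
           "OPERATE": [("fault", "SAFE")]}
    for case in generate_abstract_tests(fsm, "SAFE"):
        print(" ".join(f"{s}--{e}-->{t}" for s, e, t in case))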
Using Automation to Improve the Flight Software Testing Process
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)
2001-01-01
One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
Using Automation to Improve the Flight Software Testing Process
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.
2001-01-01
One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
NASA Technical Reports Server (NTRS)
Markos, H.
1978-01-01
Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis; SDL readiness for entry first article configuration inspection; and verification assessment.
Noises of Klystron Oscillators of a Small Power,
1986-07-03
NASA Technical Reports Server (NTRS)
Hebert, Phillip W., Sr.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Hughes, Mark S.
2012-01-01
The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which provides the transparency of the software application layers to underlying hardware regardless of test facility location, and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis development and deployment.
SLS Flight Software Testing: Using a Modified Agile Software Testing Approach
NASA Technical Reports Server (NTRS)
Bolton, Albanie T.
2016-01-01
NASA's Space Launch System (SLS) is an advanced launch vehicle for a new era of exploration beyond Earth orbit (BEO). The world's most powerful rocket, SLS, will launch crews of up to four astronauts in the agency's Orion spacecraft on missions to explore multiple deep-space destinations. Boeing is developing the SLS core stage, including the avionics that will control the vehicle during flight. The core stage will be built at NASA's Michoud Assembly Facility (MAF) in New Orleans, LA using state-of-the-art manufacturing equipment. At the same time, the rocket's avionics computer software is being developed at Marshall Space Flight Center in Huntsville, AL. At Marshall, the Flight and Ground Software division provides comprehensive engineering expertise for development of flight and ground software. Within that division, the Software Systems Engineering Branch's test and verification (T&V) team uses an agile test approach in testing and verification of software. The agile software test method opens the door for regular short sprint release cycles. The basic premise behind agile software development and testing is that software is built iteratively and incrementally: requirements and solutions evolve through collaboration between cross-functional teams. With testing and development done incrementally, features can be added and value enhanced with each release. This value can be seen throughout the T&V team processes that are documented in various work instructions within the branch. The T&V team produces procedural test results at a higher rate, resolves issues found in software with designers at an earlier stage rather than at a later release, and gains increased knowledge of the system architecture by interfacing with designers. SLS Flight Software teams want to continue uncovering better ways of developing software in an efficient and project-beneficial manner. Through agile testing, there has been increased value placed on individuals and interactions over processes and tools, improved customer collaboration, and improved responsiveness to change through controlled planning. The presentation will describe the agile testing methodology as applied by the SLS FSW Test and Verification team at Marshall Space Flight Center.
Overview of software development at the parabolic dish test site
NASA Technical Reports Server (NTRS)
Miyazono, C. K.
1985-01-01
The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs and progressing to the relatively complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed, including software in support of a meteorological subsystem and the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are discussed.
Designing Test Suites for Software Interactions Testing
2004-01-01
... the annual cost of insufficient software testing methods and tools in the United States is between 22.2 and 59.5 billion US dollars [13, 14]. ...
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G M
2005-05-03
For a number of years I had the pleasure of teaching Testing Seminars all over the world and meeting and learning from others in our field. Over a twelve year period, I always asked the following questions to Software Developers, Test Engineers, and Managers who took my two or three day seminar on Software Testing: 'When was the first time you heard the word test?' 'Where were you when you first heard the word test?' 'Who said the word test?' 'How did the word test make you feel?' Most of the thousands of responses were similar to 'It was my third grade teacher at school, and I felt nervous and afraid'. Now there were a few exceptions like 'It was my third grade teacher, and I was happy and excited to show how smart I was'. But by and large, my informal survey found that 'testing' is a word to which most people attach negative meanings, based on its historical context. So why is this important to those of us in the software development business? Because I have found that a preponderance of software developers do not get real excited about hearing that the software they just wrote is going to be 'tested' by the Test Group. Typical reactions I have heard over the years run from: 'I'm sure there is nothing wrong with the software, so go ahead and test it, better you find defects than our customers', to these extremes: 'There is no need to test my software because there is nothing wrong with it'. 'You are not qualified to test my software because you don't know as much as I do about it'. 'If any Test Engineers come into our office again to test our software we will throw them through the third floor window'. So why is there such a strong negative reaction to testing? It is primitive. It goes back to grade school for many of us. It is a negative word that conjures up negative emotions. In other words, 'test' is a four letter word. How many of us associate 'Joy' with 'Test'? Not many. It is hard for most of us to reprogram associations learned at an early age. So what can we do about it (short of hypnotic therapy for software developers)? Well, one concept I have used (and still use) is to not call testing 'testing'. Call it something else. Ever wonder why most of the Independent Software Testing groups are called Software Quality Assurance groups? Now you know. Software Quality Assurance is not such a negatively charged phrase, even though Software Quality Assurance is much more than simply testing. It was a real blessing when the concept of Validation and Verification came about for software. Now I define Validation to mean assuring that the product produced does the right thing (usually what the customer wants it to do), and Verification means that the product was built the right way (in accordance with some good design principles and practices). So I have deliberately called the System Test Group the Verification and Validation Group, or V&V Group, as a way of avoiding the negative image problem. I remember once having a conversation with a developer colleague who said, in the heat of battle, that it was fine to V&V his code, just don't test it! Once again, V&V includes many things besides testing, but it just doesn't sound like an onerous thing to do to software. In my current job, working at a highly regarded national laboratory with world renowned physicists, I have again encountered the negativity about testing software. Except here they don't take kindly to Software Quality Assurance or Software Verification and Validation either.
After all, software is just a trivial tool to automate algorithms that implement physics models. Testing, SQA, and V&V take time and get in the way of completing ground breaking science experiments. So I have again had to change the name of software testing to something less negative in the physics world. I found (the hard way) that if I requested more time to do software experimentation, the physicists' resistance melted. And so the conversation continues, 'We have time to run more software experiments. Just don't waste any time testing the software'! In case the concept of not calling testing 'testing' appeals to you, and there may be an opportunity for you to take the sting out of the name at your place of employment, I have compiled a table of things that testing could be called besides 'testing'. Of course we can embellish this by adding some good sounding prefixes and suffixes also. To come up with alternate names for testing, pick a word from columns A, B, and C in the table below. For instance Unified Acceptance Trials (A2, B7, C3) or Tailored Observational Demonstration (A6, B5, C5) or Agile Criteria Scoring (A3, B8, C8) or Rapid Requirement Proof (A1, B9, C7) or Satisfaction Assurance (B10, C1). You can probably think of some additional combinations appropriate for your industry.
ERIC Educational Resources Information Center
Clarke, Peter J.; Davis, Debra; King, Tariq M.; Pava, Jairo; Jones, Edward L.
2014-01-01
As software becomes more ubiquitous and complex, the cost of software bugs continues to grow at a staggering rate. To remedy this situation, there needs to be major improvement in the knowledge and application of software validation techniques. Although there are several software validation techniques, software testing continues to be one of the…
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditional aerospace software testing engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. Using the high-reliability MBT (model-based testing) tool developed by our company, a one-time modeling effort can automatically generate test case documents, which is efficient and accurate. Accurately expressing the process described by a UML model depends on the paths that can be reached through it, yet existing path generation algorithms are either too simple, unable to combine branch paths with loop paths, or too cumbersome, generating arrangements of paths that are meaningless and superfluous for aerospace software testing. Drawing on our aerospace project experience, we developed a path generation algorithm tailored to the UML graphical description of aerospace test software.
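The thesis's own algorithm is not reproduced here; the sketch below shows one standard way to enumerate paths through a control-flow graph while bounding how often any node may repeat, which addresses the branch-plus-loop combination the abstract describes. The graph, node names, and the bound are illustrative assumptions.

    def enumerate_paths(graph, start, end, loop_bound=2):
        """Depth-first enumeration of start-to-end paths, visiting any node
        at most loop_bound times so loops yield finitely many paths."""
        paths = []

        def dfs(node, path, visits):
            if visits.get(node, 0) >= loop_bound:
                return
            visits = {**visits, node: visits.get(node, 0) + 1}
            path = path + [node]
            if node == end:
                paths.append(path)
            for nxt in graph.get(node, []):
                dfs(nxt, path, visits)

        dfs(start, [], {})
        return paths

    # Toy activity graph: a branch at B and a loop C -> B.
    g = {"A": ["B"], "B": ["C", "D"], "C": ["B", "E"], "D": ["E"], "E": []}
    for p in enumerate_paths(g, "A", "E"):
        print(" -> ".join(p))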
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.
1988-01-01
The Wall Adjustment Strategy (WAS) software provides successful on-line control of the 2-D flexible walled test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This software package allows the level of operator intervention to be regulated as necessary for research and production type 2-D testing using an Adaptive Wall Test Section (AWTS). The software is designed to accept modification for future requirements, such as 3-D testing, with a minimum of complexity. The WAS software described is an attempt to provide a user-friendly package which could be used to control any flexible walled AWTS. Control system constraints influence the details of data transfer, not the data type; thus this entire software package could be used in different control systems, if suitable interface software is available. A complete overview of the software highlights the data flow paths, the modular architecture of the software, and the various operating and analysis modes available. A detailed description of the software modules includes listings of the code. A user's manual is provided to explain task generation, operating environment, user options, and what to expect at execution.
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
NASA Astrophysics Data System (ADS)
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project: a premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization wants to achieve the maximum possible benefit from software testing with minimum resources, so testing time and cost need to be optimized for a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
NASA Technical Reports Server (NTRS)
Tamayo, Tak Chai
1987-01-01
Software quality is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, making traditional statistical analysis unsuitable for evaluating software reliability. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.
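The abstract does not name its statistical model, so as one standard stand-in the sketch below uses the Goel-Okumoto NHPP mean value function to estimate how many faults remain latent, the kind of quantity a termination criterion compares against a reliability objective. The failure history and the crude grid-search fit are illustrative assumptions.

    import math

    def go_mean(t, a, b):
        """Goel-Okumoto NHPP: expected cumulative failures observed by time t."""
        return a * (1.0 - math.exp(-b * t))

    def fit_go(times, counts, a_grid, b_grid):
        """Crude least-squares grid fit; a real analysis would use maximum
        likelihood estimation instead."""
        best, best_err = None, float("inf")
        for a in a_grid:
            for b in b_grid:
                err = sum((go_mean(t, a, b) - n) ** 2
                          for t, n in zip(times, counts))
                if err < best_err:
                    best, best_err = (a, b), err
        return best

    # Illustrative failure history: cumulative faults found per test week.
    weeks, faults = [1, 2, 3, 4, 5, 6], [8, 14, 18, 21, 23, 24]
    a, b = fit_go(weeks, faults, range(20, 60),
                  [i / 100 for i in range(5, 60)])
    print("expected latent faults:", round(a - go_mean(weeks[-1], a, b), 1))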
Statistics of software vulnerability detection in certification testing
NASA Astrophysics Data System (ADS)
Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.
2018-05-01
The paper discusses practical aspects of introducing software vulnerability detection methods into the day-to-day activities of an accredited testing laboratory. It presents trial results of the vulnerability detection methods from the study of open source software and of software that is a test object of certification tests under information security requirements, including software for communication networks. Results of the study, showing the distribution of identified vulnerabilities by type of attack, country of origin, programming language used in development, vulnerability detection method, etc., are given. The experience of foreign information security certification systems related to the detection of vulnerabilities in certified software is analyzed. The main conclusion of the study is the need to implement secure software development practices in the development life cycle processes. Conclusions and recommendations for testing laboratories on the implementation of the vulnerability analysis methods are laid down.
Integrated testing and verification system for research flight software
NASA Technical Reports Server (NTRS)
Taylor, R. N.
1979-01-01
The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.
NASA Technical Reports Server (NTRS)
Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, PeggL.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.
2012-01-01
The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for a non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer which provides the transparency of the software application layers to underlying hardware regardless of test facility location and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis' development and deployment.
Abdominal Aortic Aneurysms in “High-Risk” Surgical Patients
Jordan, William D.; Alcocer, Francisco; Wirthlin, Douglas J.; Westfall, Andrew O.; Whitley, David
2003-01-01
Objective To evaluate the early results of endovascular grafting for high-risk surgical candidates in the treatment of abdominal aortic aneurysms (AAA). Summary Background Data Since the approval of endoluminal grafts for treatment of AAA, endovascular repair of AAA (EVAR) has expanded to include patients originally considered too ill for open AAA repair. However, some concern has been expressed regarding technical failure and the durability of endovascular grafts. Methods The University of Alabama at Birmingham (UAB) Computerized Vascular Registry identified all patients who underwent abdominal aneurysm repair between January 1, 2000, and June 12, 2002. Patients were stratified by type of repair (open AAA vs. EVAR) and were classified as low risk or high risk. Patients with at least one of the following characteristics were classified as high risk: age more than 80 years, chronic renal failure (creatinine > 2.0), compromised cardiac function (diminished ventricular function or severe coronary artery disease), poor pulmonary function, reoperative aortic procedure, a “hostile” abdomen, or an emergency operation. Death, systemic complications, and length of stay were tabulated for each group. Results During this 28-month period, 404 patients underwent AAA repair at UAB. Eighteen patients (4.5%) died within 30 days of their repair or during the same hospitalization. Two hundred seventeen patients (53%) were classified as high risk. Two hundred fifty-nine patients (64%) underwent EVAR, and 130 (50%) of these were considered high-risk patients (including four emergency procedures). One hundred forty-five patients (36%) underwent open AAA repair, including 15 emergency operations. All deaths occurred in the high-risk group: 12 (8.3%) died after open AAA repair and 6 (2.3%) died after EVAR. Postoperative length of stay was shorter for EVAR compared to open AAA repair. Conclusions High-risk and low-risk patients can undergo EVAR with a lower rate of short-term systemic complications and a shorter length of stay compared to open AAA repair. Despite concern regarding the durability of EVAR, high-risk patients should be evaluated for EVAR before committing to open AAA repair. PMID:12724628
Surgical Robotics Research in Cardiovascular Disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pohost, Gerald M; Guthrie, Barton L; Steiner, Charles
This grant is to support research in robotics at three major medical centers: the University of Southern California-USC- (Project 1); the University of Alabama at Birmingham-UAB- (Project 2); and the Cleveland Clinic Foundation-CCF- (Project 3). Project 1 is oriented toward cardiovascular applications, while Projects 2 and 3 are oriented toward neurosurgical applications. The main objective of Project 1 is to develop an approach to assist patients in maintaining a constant level of stress while undergoing magnetic resonance imaging or spectroscopy. The specific project is to use handgrip to detect the changes in high energy phosphate metabolism between rest and stress. The high energy phosphates, ATP and phosphocreatine (PCr), are responsible for the energy of the heart muscle (myocardium) responsible for its contractile function. If the blood supply to the myocardium is insufficient to support metabolism and contractility during stress, the high energy phosphates, particularly PCr, will decrease in concentration. The high energy phosphates can be tracked using phosphorus-31 magnetic resonance spectroscopy (31P MRS). In Project 2 the UAB Surgical Robotics project focuses on the use of virtual presence to assist with remote surgery and surgical training. The goal of this proposal was to assemble a pilot system for proof of concept. The pilot project was completed successfully and was judged to demonstrate that the concept of remote surgical assistance as applied to surgery and surgical training was feasible and warranted further development. The main objective of Project 3 is to develop a system to allow for the tele-robotic delivery of instrumentation during a functional neurosurgical procedure (Figure 3), instrumentation such as micro-electrical recording probes or deep brain stimulation leads. Current methods for the delivery of these instruments involve the integration of linear actuators to stereotactic navigation systems. The control of these delivery devices utilizes an open-loop configuration involving a team consisting of neurosurgeon, neurologist and neurophysiologist, all present and participating in the decision process of delivery. We propose the development of an integrated system which provides for distributed decision making and tele-manipulation of the instrument delivery system.
NASA Technical Reports Server (NTRS)
Al-Hamdan, Mohammad; Crosson, William; Economou, Sigrid; Estes, Marice Jr; Estes, Sue; Hemmings, Sarah; Kent, Shia; Puckett, Mark; Quattrochi, Dale; Wade, Gina
2013-01-01
NASA Marshall Space Flight Center is collaborating with the University of Alabama at Birmingham (UAB) School of Public Health and the Centers for Disease Control and Prevention (CDC) National Center for Public Health Informatics to address issues of environmental health and enhance public health decision-making using NASA remotely-sensed data and products. The objectives of this study are to develop high-quality spatial data sets of environmental variables, link these with public health data from a national cohort study, and deliver the linked data sets and associated analyses to local, state and federal end-user groups. Three daily environmental data sets were developed for the conterminous U.S. on different spatial resolutions for the period 2003-2008: (1) spatial surfaces of estimated fine particulate matter (PM2.5) exposures on a 10-km grid using the US Environmental Protection Agency (EPA) ground observations and NASA's MODerate-resolution Imaging Spectroradiometer (MODIS) data; (2) a 1-km grid of Land Surface Temperature (LST) using MODIS data; and (3) a 12-km grid of daily Incoming Solar Radiation (Insolation) and heat-related products using the North American Land Data Assimilation System (NLDAS) forcing data. These environmental data sets were linked with public health data from the UAB REasons for Geographic And Racial Differences in Stroke (REGARDS) national cohort study to determine whether exposures to these environmental risk factors are related to cognitive decline, stroke and other health outcomes. These environmental datasets and the results of the public health linkage analyses will be disseminated to end-users for decision-making through the CDC Wide-ranging Online Data for Epidemiologic Research (WONDER) system and through peer-reviewed publications respectively. The linkage of these data with the CDC WONDER system substantially expands public access to NASA data, making their use by a wide range of decision makers feasible. By successful completion of this research, decision-making activities, including policy-making and clinical decision-making, can be positively affected through utilization of the data products and analyses provided on the CDC WONDER system.
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Format validation software testing... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES CERTIFICATION REQUIREMENTS FOR... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying...
The Design of Software for Three-Phase Induction Motor Test System
NASA Astrophysics Data System (ADS)
Haixiang, Xu; Fengqi, Wu; Jiai, Xue
2017-11-01
The design and development of control system software is important for three-phase induction motor test equipment and requires thorough familiarity with the test process and the control procedures of the test equipment. In this paper, the software is developed in the VB language according to the national standard (GB/T1032-2005) for three-phase induction motor test methods. The control system software, the data analysis software, and the implementation of the motor test system are each described; the resulting system has the advantages of high automation and high accuracy.
NASA Astrophysics Data System (ADS)
Brouwer, Albert; Brown, David; Tomuta, Elena
2017-04-01
To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.
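CATS's internals are not documented at this level of detail; as a rough sketch of the commit-triggered scheduling idea, the loop below polls a local git checkout and runs a caller-supplied pipeline on every new commit. The polling interval and the build_and_test callable are assumptions.

    import subprocess
    import time

    def head_commit(repo_dir):
        """Return the current HEAD commit hash of a local git checkout."""
        out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=repo_dir,
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def watch_and_test(repo_dir, build_and_test, poll_seconds=60):
        """Poll the repository and launch the test pipeline on new commits."""
        last = None
        while True:
            subprocess.run(["git", "pull", "--ff-only"],
                           cwd=repo_dir, check=True)
            current = head_commit(repo_dir)
            if current != last:
                passed = build_and_test(repo_dir)  # caller-supplied pipeline
                print(f"{current[:8]}: {'PASS' if passed else 'REGRESSION'}")
                last = current
            time.sleep(poll_seconds)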
Analysis of key technologies for virtual instruments metrology
NASA Astrophysics Data System (ADS)
Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang
2008-12-01
Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to their software-centered architecture, metrological evaluation of VIs covers two aspects: measurement functions and software characteristics. The complexity of the software imposes difficulties on metrological testing of VIs. Key approaches and technologies for metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing, and statistics with the support of the powerful computing capability of the PC. Another concern is evaluation of software features such as correctness, reliability, stability, security, and real-time performance of VIs. Technologies from the software engineering, software testing, and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing, and modeling approaches can be used to evaluate the reliability of modules, components, applications, and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. To enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing, and system benchmarking, a framework for such an automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing, and security assessment demonstrates the feasibility of the proposed framework.
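As one concrete illustration of evaluating algorithm-induced measurement uncertainty by simulation (in the manner of the Monte Carlo method of GUM Supplement 1), the hedged sketch below propagates input uncertainties through a toy VI computation; the distributions and the V times I measurand are assumptions.

    import random
    import statistics

    def monte_carlo_uncertainty(measurand, input_dists, samples=50_000):
        """Propagate input uncertainties through a VI processing algorithm
        by sampling; returns the output mean and standard uncertainty."""
        outputs = []
        for _ in range(samples):
            inputs = {name: draw() for name, draw in input_dists.items()}
            outputs.append(measurand(**inputs))
        return statistics.mean(outputs), statistics.stdev(outputs)

    # Toy virtual instrument: power P = V * I from noisy channel readings.
    mean_p, u_p = monte_carlo_uncertainty(
        lambda v, i: v * i,
        {"v": lambda: random.gauss(12.0, 0.05),
         "i": lambda: random.gauss(1.5, 0.01)})
    print(f"P = {mean_p:.3f} W, standard uncertainty {u_p:.3f} W")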
Sustaining Software-Intensive Systems
2006-05-01
2.2 Multi-Service Operational Test and Evaluation; 2.3 Stable Software Baseline ... or equivalent document • completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if not multi-service) • stable software production baseline • complete and current software documentation • Authority to Operate (ATO) for an
Artificial intelligence and expert systems in-flight software testing
NASA Technical Reports Server (NTRS)
Demasie, M. P.; Muratore, J. F.
1991-01-01
The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Test Documentation for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1207, ``Test Documentation for Digital... practices for test documentation for software and computer systems as described in the Institute of...
Validation and Verification of LADEE Models and Software
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2013-01-01
The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.
Testing of Hand-Held Mine Detection Systems
2015-01-08
ITOP 04-2-5208 for guidance on software testing. Testing software is necessary to ensure that safety is designed into the software algorithm, and that ... sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact the test design. Some examples of ... overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. It is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.
2016-01-01
Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
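As a concrete illustration of the kind of Python-based testing tools the talk refers to, here is a minimal pytest sketch; the science routine and all names in it are invented for illustration and are not from the talk itself.

```python
# A hedged sketch: a tiny science routine plus two automated tests,
# in the style supported by pytest (the routine is hypothetical).
import math
import pytest

def flux_to_mag(flux, zeropoint=25.0):
    """Convert a flux to an AB-style magnitude (illustrative only)."""
    if flux <= 0:
        raise ValueError("flux must be positive")
    return zeropoint - 2.5 * math.log10(flux)

def test_known_value():
    # 25 - 2.5*log10(100) = 20, a hand-checked reference value
    assert flux_to_mag(100.0) == pytest.approx(20.0)

def test_rejects_nonpositive_flux():
    with pytest.raises(ValueError):
        flux_to_mag(-1.0)
```

Running `pytest` on such a file discovers and executes both tests automatically, which is what makes re-running the suite after every modification cheap.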
Academic Testing and Grading with Spreadsheet Software.
ERIC Educational Resources Information Center
Ho, James K.
1987-01-01
Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vohra, Yogesh K.
The role of nitrogen in the fabrication of designer diamond was systematically investigated by adding a controlled amount of nitrogen to the hydrogen/methane/oxygen plasma. This has led to a successful recipe for reproducible fabrication of designer diamond anvils for high-pressure high-temperature research in support of the stockpile stewardship program. In the three-year support period, several designer diamonds fabricated with this new growth chemistry were utilized in high-pressure experiments at UAB and Lawrence Livermore National Laboratory. The designer diamond anvils were utilized in high-pressure studies on heavy rare earth metals, high-pressure melting studies on metals, and electrical resistance measurements on iron-based layered superconductors under high pressures. The growth chemistry developed under NNSA support can be adapted for commercial production of designer diamonds.
Assessment Environment for Complex Systems Software Guide
NASA Technical Reports Server (NTRS)
2013-01-01
This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.
NASA Technical Reports Server (NTRS)
Soderstrom, Tomas J.; Krall, Laura A.; Hope, Sharon A.; Zupke, Brian S.
1994-01-01
A Telos study of 40 recent subsystem deliveries into the DSN at JPL found software interface testing to be the single most expensive and error-prone activity, and the study team suggested creating an automated software interface test tool. The resulting Software Interface Verifier (SIV), which was funded by NASA/JPL and created by Telos, employed 92 percent software reuse to quickly create an initial version which incorporated early user feedback. SIV is now successfully used by developers for interface prototyping and unit testing, by test engineers for formal testing, and by end users for non-intrusive data flow tests in the operational environment. Metrics, including cost, are included. Lessons learned include the need for early user training. SIV is ported to many platforms and can be successfully used or tailored by other NASA groups.
Proactive Security Testing and Fuzzing
NASA Astrophysics Data System (ADS)
Takanen, Ari
Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs of security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
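A toy illustration of the fuzzing idea described above, under the assumption of a deliberately fragile parser as the test target; real fuzzers and MBT-driven generators are far more systematic about input structure and coverage feedback.

```python
# Minimal mutation fuzzer sketch: feed randomly mutated inputs to a
# parser and report the first input that triggers an exception.
# The parser and seed input are invented for illustration.
import random

def parse_record(data: bytes):
    # Fragile length-prefixed parser used as the test target.
    length = data[0]
    return data[1:1 + length].decode("ascii")

seed = b"\x05hello"
random.seed(1)
for _ in range(1000):
    mutated = bytearray(seed)
    pos = random.randrange(len(mutated))
    mutated[pos] = random.randrange(256)       # one-byte mutation
    try:
        parse_record(bytes(mutated))
    except (UnicodeDecodeError, IndexError) as exc:
        print(f"input {bytes(mutated)!r} triggered {type(exc).__name__}")
        break
```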
NASA Astrophysics Data System (ADS)
Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián
2018-02-01
In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage; any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) standards, testing this kind of critical software must cover 100% of the source code statements and decision paths. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the exacting code coverage demands on the boot software.
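The combination of full decision coverage and fault injection can be sketched as follows; the memory-check routine, its error codes, and the injected fault are invented for illustration and are not taken from the EPD boot software.

```python
# Sketch: exercise every decision outcome of a boot-style memory check,
# including the error path, by injecting a fault through a mock.
from unittest import mock

def check_image(read_word, n_words, expected_crc):
    crc = 0
    for addr in range(n_words):
        try:
            crc ^= read_word(addr)       # may fail on a corrupted cell
        except IOError:
            return "MEMORY_FAULT"        # decision path: injected fault
    return "OK" if crc == expected_crc else "CRC_MISMATCH"

words = [3, 5, 7]
good = lambda addr: words[addr]
assert check_image(good, 3, 3 ^ 5 ^ 7) == "OK"            # nominal path
assert check_image(good, 3, 0) == "CRC_MISMATCH"          # bad-CRC path

faulty = mock.Mock(side_effect=IOError("corrupted cell"))  # fault injection
assert check_image(faulty, 3, 0) == "MEMORY_FAULT"         # recovery path
print("all decision outcomes exercised")
```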
Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.
DOT National Transportation Integrated Search
2009-01-01
The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...
Taking advantage of ground data systems attributes to achieve quality results in testing software
NASA Technical Reports Server (NTRS)
Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.
1994-01-01
During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved, only approached to varying degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission-specific versions of the TASS. Very little new software needs to be developed, mainly mission-specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.
ESO imaging survey: optical deep public survey
NASA Astrophysics Data System (ADS)
Mignano, A.; Miralles, J.-M.; da Costa, L.; Olsen, L. F.; Prandoni, I.; Arnouts, S.; Benoist, C.; Madejsky, R.; Slijkhuis, R.; Zaggia, S.
2007-02-01
This paper presents new five-passband (UBVRI) optical wide-field imaging data accumulated as part of the DEEP Public Survey (DPS) carried out as a public survey by the ESO Imaging Survey (EIS) project. Out of the 3 square degrees originally proposed, the survey covers 2.75 square degrees in at least one band (normally R), and 1.00 square degree in five passbands. The median seeing, as measured in the final stacked images, is 0.97 arcsec, ranging from 0.75 arcsec to 2.0 arcsec. The median limiting magnitudes (AB system, 2″ aperture, 5σ detection limit) are U_AB = 25.65, B_AB = 25.54, V_AB = 25.18, R_AB = 24.8 and I_AB = 24.12 mag, consistent with those proposed in the original survey design. The paper describes the observations and data reduction using the EIS Data Reduction System and its associated EIS/MVM library. The quality of the individual images was inspected, bad images were discarded and the remaining ones used to produce final image stacks in each passband, from which sources have been extracted. The scientific quality of these final images and associated catalogs was then assessed qualitatively by visual inspection and quantitatively by comparison of statistical measures derived from these data with those of other authors as well as model predictions, and from direct comparison with the results obtained from the reduction of the same dataset using an independent (hands-on) software system. Finally, to illustrate one application of this survey, the results of a preliminary effort to identify sub-mJy radio sources are reported. To the limiting magnitude reached in the R and I passbands the success rate ranges from 66 to 81% (depending on the fields). These data are publicly available at CDS. Based on observations carried out at the European Southern Observatory, La Silla, Chile under program Nos. 164.O-0561, 169.A-0725, and 267.A-5729. Appendices A, B and C are only available in electronic form at http://www.aanda.org
NASA Astrophysics Data System (ADS)
Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.
2016-07-01
The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, along with a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adopt the new system are described, how these were applied to the current software, and the results obtained. An overview of how the new system may be used in future projects is also presented.
Field Test of Route Planning Software for Lunar Polar Missions
NASA Astrophysics Data System (ADS)
Horchler, A. D.; Cunningham, C.; Jones, H. L.; Arnett, D.; Fang, E.; Amoroso, E.; Otten, N.; Kitchell, F.; Holst, I.; Rock, G.; Whittaker, W.
2017-10-01
A novel field test paradigm has been developed to demonstrate and validate route planning software in the stark low-angled light and sweeping shadows a rover would experience at the poles of the Moon. Software, ConOps, and test results are presented.
1988-06-01
Based Software Engineering Project Course ... Software Engineering, Software Engineering Concepts: The Importance of Object-Based...quality assurance, and independent system testing. The Chief Programmer is responsible for all software development activities, including prototyping...during the Requirements Analysis phase, the Preliminary Design, the Detailed Design, Coding and Unit Testing, CSC Integration and Testing, and informal
Software OT&E Guidelines. Volume 1. Software Test Manager’s Handbook
1981-02-01
The Software OT&E Guidelines is a set of handbooks prepared by the Computer/Support Systems...is one of a set of handbooks prepared by the Computer/Support Systems Division of the Test and Evaluation Directorate, Air Force Test and Evaluation...E. Software Maintainability...F. Standard Questionnaires...1. Operator-Computer Interface Evaluation
Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction
Venkatesan, R.
2016-01-01
Effective prediction of software modules that are prone to defects will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, whose aim is to save time and budget by detecting defects at the earliest stage. This testing phase should be operated carefully and effectively in order to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are more likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets. PMID:27738649
Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.
Kumudha, P; Venkatesan, R
Effective prediction of software modules that are prone to defects will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, whose aim is to save time and budget by detecting defects at the earliest stage. This testing phase should be operated carefully and effectively in order to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are more likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets.
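To make the RBFNN idea concrete, here is a minimal sketch of a radial basis function classifier on synthetic module metrics; it substitutes random center selection for the paper's ADBBO optimizer and uses least-squares output weights, so it illustrates the network structure only, not the published method.

```python
# Minimal RBF-network defect classifier sketch (all data synthetic).
import numpy as np

def rbf_features(X, centers, gamma):
    # Gaussian RBF activations: phi_ij = exp(-gamma * ||x_i - c_j||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Toy defect data: rows = modules, cols = static metrics (e.g., LOC, complexity).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # 1 = defect-prone (synthetic)

centers = X[rng.choice(len(X), 10, replace=False)]  # stand-in for ADBBO-tuned centers
Phi = rbf_features(X, centers, gamma=0.5)
W, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # least-squares output weights
pred = (Phi @ W > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```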
NASA Astrophysics Data System (ADS)
Wang, Qiang
2017-09-01
As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. Alongside the functional software, the process for security software and its testing is discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission, and maintenance. For each phase, the paper proposes subprocesses to support software security. As an example, the paper applies the above process to a power information platform.
The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance
2010-05-01
Test Environment: A Process for Achieving Software Test Acceptance...The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance. Patrick V...was awarded the Bronze Star. Introduction: The Rapid Integration and Test Environment (RITE) initiative, implemented by the Program Executive Office
IMCS reflight certification requirements and design specifications
NASA Technical Reports Server (NTRS)
1984-01-01
The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.
Cassini's Test Methodology for Flight Software Verification and Operations
NASA Technical Reports Server (NTRS)
Wang, Eric; Brown, Jay
2007-01-01
The Cassini spacecraft was launched on 15 October 1997 on a Titan IV-B launch vehicle. The spacecraft comprises various subsystems, including the Attitude and Articulation Control Subsystem (AACS). The AACS Flight Software (FSW) has been an ongoing development effort, from design and development through operations. As planned, major modifications to certain FSW functions were designed, tested, verified and uploaded during the cruise phase of the mission. Each flight software upload involved extensive verification testing. A standardized FSW testing methodology was used to verify the integrity of the flight software. This paper summarizes the flight software testing methodology used for verifying FSW from pre-launch through the prime mission, with an emphasis on flight experience testing during the first 2.5 years of the prime mission (July 2004 through January 2007).
Mars Science Laboratory Flight Software Boot Robustness Testing Project Report
NASA Technical Reports Server (NTRS)
Roth, Brian
2011-01-01
On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unmanned automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher-fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.
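A schematic sketch of the configuration-sweep style of boot testing described above; the simulator API, hardware states, and outcomes are invented placeholders, not MSL interfaces.

```python
# Sweep a grid of starting configurations through a stand-in boot
# simulation and assert every run lands in an allowed end state.
import itertools

BATTERY = ["full", "low"]
PROM_BANK = ["primary", "backup"]
RADIO = ["on", "off"]

def simulate_boot(battery, prom_bank, radio):
    # Stand-in for running the software simulation with this hardware state.
    if battery == "low" and prom_bank == "backup":
        return "BOOT_DEGRADED"
    return "BOOT_OK"

# Drive back-to-back automated runs across the whole configuration space.
for cfg in itertools.product(BATTERY, PROM_BANK, RADIO):
    result = simulate_boot(*cfg)
    assert result in ("BOOT_OK", "BOOT_DEGRADED"), f"unexpected state for {cfg}"
    print(cfg, "->", result)
```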
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
Multi-version software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1989-01-01
A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work on software reliability estimation methods based on non-random sampling was also continued, as was study of the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
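Back-to-back testing, mentioned in this and the preceding abstract, can be illustrated with a minimal sketch: two independently written versions of the same routine are driven with identical random inputs, and any discrepancy is flagged for fault analysis. The routines here are toy stand-ins, not the experiment's multi-version software.

```python
# Back-to-back test sketch: compare two versions on random inputs.
import heapq
import random

def version_a(xs):
    return sorted(xs)[len(xs) // 2]                  # upper median via full sort

def version_b(xs):
    # Same specification, independently implemented via partial selection.
    return heapq.nsmallest(len(xs) // 2 + 1, xs)[-1]

random.seed(42)
for trial in range(10_000):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(1, 50))]
    a, b = version_a(xs), version_b(xs)
    if a != b:
        print(f"discrepancy on input {xs}: {a} != {b}")
        break
else:
    print("no discrepancies in 10,000 trials")
```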
IHE cross-enterprise document sharing for imaging: interoperability testing software
2010-01-01
Background: With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results: In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. Conclusions: EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties. PMID:20858241
IHE cross-enterprise document sharing for imaging: interoperability testing software.
Noumeir, Rita; Renaud, Bérubé
2010-09-21
With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties.
15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).
Code of Federal Regulations, 2013 CFR
2013-01-01
... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...
15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).
Code of Federal Regulations, 2012 CFR
2012-01-01
... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...
Mars Science Laboratory Boot Robustness Testing
NASA Technical Reports Server (NTRS)
Banazadeh, Payam; Lam, Danny
2011-01-01
Mars Science Laboratory (MSL) is one of the most complex spacecraft in the history of mankind. Due to its complexity, a large number of flight software (FSW) requirements have been written for implementation. In practice, these requirements necessitate very complex and very precise flight software with no room for error. One of the flight software's responsibilities is to boot up and check the state of all devices on the spacecraft after the wake-up process. This boot-up and initialization is crucial to mission success, since any misbehavior of different devices needs to be handled through the flight software. I have created a test toolkit that allows the FSW team to exhaustively test the flight software under a variety of unexpected scenarios and validate that the flight software can handle any situation after booting up. The test includes initializing different devices on the spacecraft to different configurations and validating, at the end of the flight software boot-up, that the flight software has initialized those devices to what they are supposed to be in that particular scenario.
Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions
2012-07-01
Capability Maturity Model Integration (CMMI) [Davis 2009]. Team Software Process (TSP) and Capability Maturity Model Integration are service...STP: Software Test Plan; TEP: Test and Evaluation Plan; TSP: Team Software Process; V&V: verification and validation (CMU/SEI-2012-TN-016)...Supporting the Use of CERT Secure Coding Standards in DoD Acquisitions. Tim Morrow (Software Engineering Institute), Robert Seacord (Software
Tools for Embedded Computing Systems Software
NASA Technical Reports Server (NTRS)
1978-01-01
A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures from each workshop presentation, together with chairmen's summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.
Firing Room Remote Application Software Development
NASA Technical Reports Server (NTRS)
Liu, Kan
2015-01-01
The Engineering and Technology Directorate (NE) at National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC) is designing a new command and control system for the checkout and launch of Space Launch System (SLS) and future rockets. The purposes of the semester-long internship as a remote application software developer include the design, development, integration, and verification of the software and hardware in the firing rooms, in particular with the Mobile Launcher (ML) Launch Accessories (LACC) subsystem. In addition, a software test verification procedure document was created to verify and check out LACC software for Launch Equipment Test Facility (LETF) testing.
1992-04-01
contractor's existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the...either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a...to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more
System Testing of Ground Cooling System Components
NASA Technical Reports Server (NTRS)
Ensey, Tyler Steven
2014-01-01
This internship focused primarily upon software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) utilized in software verification. Unit tests are used to test the software of necessary components before it is implemented into the hardware. A unit test exercises the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three different files are used to build and complete an efficient unit test: the Model Test file (.mdl), the Simulink SystemTest (.test), and the autotest (.m). The Model Test file includes the component that is being tested with the appropriate Discrete Physical Interface (DPI) for testing. The Simulink SystemTest is a program used to test all of the requirements of the component. The autotest verifies that the component passes Model Advisor and System Testing, and writes the results to the proper files. Once unit testing is completed on the GCS components, they can be implemented into the GCS schematic and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows for the testing of basic components before a higher-fidelity model is tested, making the testing process flow in an orderly manner.
Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de
2012-10-01
The standardization of images used in Medicine was performed in 1993 using the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are not usually free and open-source, and this fact hinders their adjustment to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images, we developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 tests divided between two observers using ImageLab and another software sold with Philips Brilliance computed tomography scanners in the evaluation of coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
Writing executable assertions to test flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement results in a true value. Executable assertions can be used for dynamic testing of software. They can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
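A minimal sketch of an executable assertion in the sense defined above, guarding the output of a flight-control-style computation; the control law, variable names, and limits are hypothetical, not taken from the paper.

```python
# Executable assertion sketch: a logical statement about an output
# variable that evaluates to True when no error occurred.
def commanded_pitch(altitude_m: float, airspeed_ms: float) -> float:
    # Stand-in for a control-law computation (hypothetical).
    pitch_deg = 0.01 * altitude_m - 0.05 * airspeed_ms

    # If the assertion is False, a fault is signaled; in the operational
    # phase this would trigger exception handling rather than an abort.
    assert -15.0 <= pitch_deg <= 25.0, f"pitch command out of range: {pitch_deg}"
    return pitch_deg

print(commanded_pitch(1000.0, 80.0))
```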
HPC Software Stack Testing Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garvey, Cormac
The HPC Software stack testing framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC Software stack (Compilers, MPI, Numerical libraries and Applications) and to quickly discover hard failures, and as a by-product it will indirectly check the HPC infrastructure (network, PBS and licensing servers).
Testing of Safety-Critical Software Embedded in an Artificial Heart
NASA Astrophysics Data System (ADS)
Cha, Sungdeok; Jeong, Sehun; Yoo, Junbeom; Kim, Young-Gab
Software is being used more frequently to control medical devices such as artificial hearts or robotic surgery systems. While many of the software safety issues in such systems are similar to those in other safety-critical systems (e.g., nuclear power plants), domain-specific properties may warrant the development of customized techniques to demonstrate the fitness of the system for patients. In this paper, we report the results of a preliminary analysis done on software controlling a Hybrid Ventricular Assist Device (H-VAD) developed by the Korea Artificial Organ Centre (KAOC). It is a state-of-the-art artificial heart which has completed its animal testing phase. We performed software testing in in-vitro experiments and animal experiments. An abnormal behaviour, never detected during extensive in-vitro analysis and animal testing, was found.
A program downloader and other utility software for the DATAC bus monitor unit
NASA Technical Reports Server (NTRS)
Novacki, Stanley M., III
1987-01-01
A set of programs designed to facilitate software testing on the DATAC Bus Monitor is described. By providing a means to simplify program loading, firmware generation, and subsequent testing of programs, the overhead involved in software evaluation is reduced and that time is used more productively in the performance analysis and improvement of current software.
Risk-Based Object Oriented Testing
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert
2000-01-01
Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
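The complexity-driven ranking behind risk-based testing can be sketched briefly: here an approximate cyclomatic complexity (decision points plus one) ranks classes so the riskiest are tested first. The classes, the counting rule, and the threshold are illustrative only, not the paper's methodology.

```python
# Toy risk-based prioritization: rank classes by approximate complexity.
import ast
import textwrap

source = textwrap.dedent("""
    class Telemetry:
        def decode(self, frame):
            if not frame:
                return None
            for word in frame:
                if word < 0:
                    raise ValueError
            return frame

    class Logger:
        def write(self, msg):
            print(msg)
""")

DECISIONS = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def complexity(node):
    # Decision points + 1, a rough stand-in for cyclomatic complexity.
    return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(node))

tree = ast.parse(source)
ranked = sorted(((complexity(c), c.name) for c in tree.body
                 if isinstance(c, ast.ClassDef)), reverse=True)
for score, name in ranked:
    priority = "high" if score > 2 else "low"
    print(f"{name}: complexity {score} -> test priority {priority}")
```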
1979-08-21
Appendix 8 - Outline and Draft Material for Proposed Triservice Interim Guideline on Application of Software Acceptance Criteria...Appendix 9...OUTLINE AND DRAFT MATERIAL FOR PROPOSED TRISERVICE INTERIM GUIDELINE ON APPLICATION OF SOFTWARE ACCEPTANCE CRITERIA. I. INTRODUCTION. The purpose of this guide...contract item (CPCI) (code); 5. CPCI test plan; 6. CPCI test procedures; 7. CPCI test report; 8. Handbooks and manuals. Although additional material does
NASA Astrophysics Data System (ADS)
Kohno, Wataru; Kirikoshi, Akimitsu; Kita, Takafumi
2018-03-01
We construct a variational ground-state wave function of weakly interacting M-component Bose-Einstein condensates beyond the mean-field theory by incorporating the dynamical 3/2-body processes, where one of the two colliding particles drops into the condensate and vice versa. Our numerical results with various masses and particle numbers show that the 3/2-body processes between different particles make finite contributions to lowering the ground-state energy, implying that many-body correlation effects between different particles are essential even in the weak-coupling regime of Bose-Einstein condensates. We also consider the stability condition for 2-component miscible states using the new ground-state wave function. Through this calculation, we obtain the relation U_AB^2/(U_AA U_BB) < 1 + α, where U_ij is the effective contact potential between particles i and j and α is a correction originating from the 3/2- and 2-body processes.
A taxonomy and discussion of software attack technologies
NASA Astrophysics Data System (ADS)
Banks, Sheila B.; Stytz, Martin R.
2005-03-01
Software is a complex thing. It is not an engineering artifact that springs forth from a design by simply following software coding rules; creativity and the human element are at the heart of the process. Software development is part science, part art, and part craft. Design, architecture, and coding are equally important activities, and in each of these activities errors may be introduced that lead to security vulnerabilities. Therefore, inevitably, errors enter the code. Some of these errors are discovered during testing; however, some are not. The best way to find security errors, whether they are introduced as part of the architecture development effort or the coding effort, is to automate the security testing process to the maximum extent possible and to add this class of tools to the tools available for the compilation process, testing, test analysis, and software distribution. Recent technological advances, improvements in computer-generated forces (CGFs), and results of research in information assurance and software protection indicate that we can build a semi-intelligent software security testing tool. However, before we can undertake the security testing automation effort, we must understand the scope of the required testing, the security failures that need to be uncovered during testing, and the characteristics of those failures. Therefore, we undertook the research reported in this paper: the development of a taxonomy and a discussion of software attacks generated from the point of view of the security tester, with the goal of using the taxonomy to guide the development of the knowledge base for the automated security testing tool. The representation for attacks and threat cases yielded by this research captures the strategies, tactics, and other considerations that come into play during the planning and execution of attacks upon application software. The paper is organized as follows. Section one contains an introduction to our research and a discussion of the motivation for our work. Section two presents our taxonomy of software attacks and a discussion of the strategies employed and general weaknesses exploited for each attack. Section three contains a summary and suggestions for further research.
PDSS/IMC qualification test software acceptance procedures
NASA Technical Reports Server (NTRS)
1984-01-01
Tests to be performed for qualifying the payload development support system image motion compensator (IMC) are identified. The performance of these tests will verify the IMC interfaces and thereby verify the qualification test software.
DSN system performance test software
NASA Technical Reports Server (NTRS)
Martin, M.
1978-01-01
The system performance test software is currently being modified to include additional capabilities and enhancements. Additional software programs are currently being developed for the Command Store and Forward System and the Automatic Total Recall System. The test executive is the main program. It controls the input and output of the individual test programs by routing data blocks and operator directives to those programs. It also processes data block dump requests from the operator.
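The routing role of the test executive described above can be sketched as a small dispatcher; the block layout, program names, and queueing are invented for illustration and are not the DSN design.

```python
# Dispatcher sketch: an executive routes data blocks and operator
# directives to the individual test programs by destination tag.
from queue import Queue

class TestProgram:
    def __init__(self, name):
        self.name = name

    def handle(self, block):
        print(f"{self.name} processing block {block['id']}")

programs = {"telemetry": TestProgram("telemetry"),
            "command": TestProgram("command")}
inbox = Queue()

# Incoming data blocks tagged with their destination program (invented format).
inbox.put({"id": 1, "dest": "telemetry"})
inbox.put({"id": 2, "dest": "command"})

while not inbox.empty():
    block = inbox.get()
    programs[block["dest"]].handle(block)   # executive routes block to program
```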
Test Driven Development of Scientific Models
NASA Technical Reports Server (NTRS)
Clune, Thomas L.
2012-01-01
Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
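One TDD iteration might look like the following in Python's built-in unittest; pFUnit, mentioned above, applies the same cycle to parallel Fortran, and the routine under test here is invented purely to illustrate the red-green rhythm.

```python
# One TDD iteration sketched with unittest (illustrative routine).
import unittest

# Red: the tests below are written first and initially fail because
# saturation_ratio does not exist yet.
# Green: this is the simplest implementation that makes them pass;
# refactoring would follow as a third step.
def saturation_ratio(partial, saturation):
    if saturation == 0:
        raise ValueError("saturation pressure must be nonzero")
    return partial / saturation

class TestSaturationRatio(unittest.TestCase):
    def test_half_saturated(self):
        self.assertAlmostEqual(saturation_ratio(50.0, 100.0), 0.5)

    def test_zero_saturation_rejected(self):
        with self.assertRaises(ValueError):
            saturation_ratio(10.0, 0.0)

if __name__ == "__main__":
    unittest.main()
```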
Applications of Logic Coverage Criteria and Logic Mutation to Software Testing
ERIC Educational Resources Information Center
Kaminski, Garrett K.
2011-01-01
Logic is an important component of software. Thus, software logic testing has enjoyed significant research over a period of decades, with renewed interest in the last several years. One approach to detecting logic faults is to create and execute tests that satisfy logic coverage criteria. Another approach to detecting faults is to perform mutation…
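A small sketch of a logic coverage criterion in action: for an invented predicate, it checks that each clause can independently determine the outcome, the idea behind active-clause criteria such as MC/DC. This is an illustration only, not the dissertation's tooling.

```python
# Enumerate assignments where flipping a single clause flips the
# predicate, showing that the clause independently affects the outcome.
from itertools import product

def predicate(a, b, c):
    return (a and b) or c

def determining_pairs(clause_index):
    pairs = []
    for bits in product([False, True], repeat=3):
        flipped = list(bits)
        flipped[clause_index] = not flipped[clause_index]
        if predicate(*bits) != predicate(*flipped):
            pairs.append((bits, tuple(flipped)))
    return pairs

for i, name in enumerate("abc"):
    print(f"clause {name}: {len(determining_pairs(i))} determining pairs")
```

Test inputs drawn from such pairs form a suite in which every clause is demonstrated to matter, which is exactly what a logic coverage criterion demands.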
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2013-12-01
This paper presents some possibilities concerning the implementation of test-driven development as a programming method. It offers a different point of view on the creation of advanced programming techniques: building tests before the program source, with all necessary software tools and modules. This nontraditional approach, in which tests are built first to ease the programmer's work, is a preferable way of software development. It allows comparatively simple programming (applied with different object-oriented programming languages such as JAVA, XML, PYTHON, etc.) and is a predictable way to develop software tools and to help create better software that is also easier to maintain. Test-driven programming is able to replace more complicated traditional paradigms used by many programmers.
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
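The kind of reliability-growth simulation described here can be sketched with a toy Jelinski-Moranda-style model, in which the failure rate is proportional to the number of latent faults remaining and each detected fault is repaired on the spot; the fault count, hazard constant, and repair assumption are all assumptions for illustration, not the experiment's parameters.

```python
# Toy reliability-growth simulation: inter-failure times lengthen as
# faults are removed under random usage.
import random

random.seed(7)
N, phi = 25, 0.01            # latent faults, per-fault hazard contribution
remaining, t = N, 0.0
failure_times = []

while remaining > 0:
    rate = phi * remaining                  # failure rate proportional to faults left
    t += random.expovariate(rate)           # exponential time to next failure
    failure_times.append(t)
    remaining -= 1                          # fault repaired on detection

gaps = [round(b - a, 1) for a, b in zip([0.0] + failure_times, failure_times)]
print("first 5 inter-failure gaps:", gaps[:5])
print("total test time to remove all faults:", round(t, 1))
```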
Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; DeRosier, Leo C; de la Torre, Jorge I
2013-01-01
Objective: Serum albumin levels have been used to evaluate the severity of burns and the protein nutrition status in burn patients, specifically in the response of the burn patient to nutrition, although it has not been proven that all these associations are fully founded. The aim of this retrospective study was to determine the relationship of serum albumin levels at 3-7 days after the burn injury with the total body surface area burned (TBSA), the length of hospital stay (LHS) and the initiation of the oral/enteral nutrition (IOEN). Subjects and methods: The study was carried out with the health records of patients who met the inclusion criteria and were admitted to the burn units at the University Hospital of Reina Sofia (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA) over a 10-year period, between January 2000 and December 2009. We studied the statistical association of serum albumin levels with TBSA, LHS and IOEN by one-way ANOVA. The confidence interval chosen for statistical differences was 95%. Duncan's test was used to determine the number of statistically significant groups. Results: Results were expressed as mean ± standard deviation. We found serum albumin levels associated with TBSA and LHS, with greater to lesser serum albumin levels associated with lesser to greater TBSA and LHS. We did not find a statistical association with IOEN. Conclusion: We conclude that serum albumin levels are not a nutritional marker in burn patients, although they could be used as a simple clinical tool to identify the severity of the burn wounds, represented by the total body surface area burned and the length of hospital stay. PMID:23875122
Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; Derosier, Leo C; de la Torre, Jorge I
2013-01-01
Serum albumin levels have been used to evaluate the severity of burns and the protein nutrition status in burn patients, specifically in the response of the burn patient to nutrition, although it has not been proven that all these associations are fully founded. The aim of this retrospective study was to determine the relationship of serum albumin levels at 3-7 days after the burn injury with the total body surface area burned (TBSA), the length of hospital stay (LHS) and the initiation of the oral/enteral nutrition (IOEN). It was carried out with the health records of patients who met the inclusion criteria and were admitted to the burn units at the University Hospital of Reina Sofia (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA) over a 10-year period, between January 2000 and December 2009. We studied the statistical association of serum albumin levels with TBSA, LHS and IOEN by one-way ANOVA. The confidence interval chosen for statistical differences was 95%. Duncan's test was used to determine the number of statistically significant groups. Results were expressed as mean ± standard deviation. We found serum albumin levels associated with TBSA and LHS, with greater to lesser serum albumin levels associated with lesser to greater TBSA and LHS. We did not find a statistical association with IOEN. We conclude that serum albumin levels are not a nutritional marker in burn patients, although they could be used as a simple clinical tool to identify the severity of the burn wounds, represented by the total body surface area burned and the length of hospital stay.
NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities
NASA Technical Reports Server (NTRS)
Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.
2015-01-01
Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow-driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in the language is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework, which provides a level of code reuse and extensibility that has previously been difficult to achieve using LabVIEW.
Rules of thumb to increase the software quality through testing
NASA Astrophysics Data System (ADS)
Buttu, M.; Bartolini, M.; Migoni, C.; Orlati, A.; Poppi, S.; Righini, S.
2016-07-01
Software maintenance typically requires 40-80% of the overall project costs, and this considerable variability mostly depends on the software's internal quality: the more the software is designed and implemented to constantly welcome new changes, the lower the maintenance costs will be. Internal quality is typically enforced through testing, which in turn also affects the development and maintenance costs. This is the reason why testing methodologies have become a major concern for any company that builds - or is involved in building - software. Although there is no testing approach that suits all contexts, we infer some general guidelines learned during the development of the Italian Single-dish COntrol System (DISCOS), a project aimed at producing the control software for the three INAF radio telescopes (the Medicina and Noto dishes, and the newly-built SRT). These guidelines concern both the development and the maintenance phases, and their ultimate goal is to maximize the DISCOS software quality through a Behavior-Driven Development (BDD) workflow beside a continuous delivery pipeline. We consider different topics and patterns: the proper apportioning of the tests (from end-to-end to low-level tests), the choice between hardware simulators and mocks, why and how to apply TDD and dependency injection to increase the test coverage, the emerging technologies available for test isolation, bug fixing, how to protect the system from changes in external resources (firmware updates, hardware substitution, etc.) and, finally, how to accomplish BDD starting from functional tests and going through integration and unit tests. We discuss pros and cons of each solution and point out the motivations of our choices either as a general rule or narrowed in the context of the DISCOS project.
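One of the patterns listed above, dependency injection with a mock standing in for real hardware, can be sketched as follows; the controller class and drive interface are invented for illustration, not DISCOS code.

```python
# Dependency injection sketch: the controller receives its device
# interface as a constructor argument, so a test can inject a mock
# in place of real telescope hardware.
from unittest import mock

class AntennaController:
    def __init__(self, drive):
        self.drive = drive          # injected dependency (real or mock)

    def slew_to(self, azimuth):
        if not 0 <= azimuth < 360:
            raise ValueError("azimuth out of range")
        self.drive.move(azimuth)

def test_slew_commands_drive():
    fake_drive = mock.Mock()
    AntennaController(fake_drive).slew_to(180)
    fake_drive.move.assert_called_once_with(180)

test_slew_commands_drive()
print("mock-based unit test passed")
```

Because the dependency is passed in rather than constructed internally, the same controller runs unchanged against real hardware, a simulator, or a mock, which is what makes high unit-test coverage practical.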
ERIC Educational Resources Information Center
Scott, Elsje; Zadirov, Alexander; Feinberg, Sean; Jayakody, Ruwanga
2004-01-01
Software testing is a crucial component in the development of good quality systems in industry. For this reason it was considered important to investigate the extent to which the Information Systems (IS) syllabus at the University of Cape Town (UCT) was aligned with accepted software testing practices in South Africa. For students to be effective…
Acquisition Handbook - Update. Comprehensive Approach to Reusable Defensive Software (CARDS)
1994-03-25
designs, and implementation components (source code, test plans, procedures and results, and system/software documentation). This handbook provides a...activities where software components are acquired, evaluated, tested and sometimes modified. In addition to serving as a facility for the acquisition and...systems from such components [1]. Implementation components are at the lowest level and consist of: specifications; detailed designs; code, test
Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-02-01
New test procedure evaluates quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.
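The comparison step this methodology prescribes, checking a tool's prediction against reference results from state-of-the-art simulations, can be pictured with a sketch like the following (purely illustrative; the numbers and function names are hypothetical, not part of the test suite):

def within_reference_band(predicted_kwh, reference_results_kwh):
    # Does the tool's predicted retrofit savings fall within the band
    # spanned by the reference simulation programs?
    lo, hi = min(reference_results_kwh), max(reference_results_kwh)
    return lo <= predicted_kwh <= hi

reference = [2310.0, 2475.0, 2402.0]             # hypothetical reference results
print(within_reference_band(2390.0, reference))  # True: inside the band
print(within_reference_band(2900.0, reference))  # False: flags a possible bug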
Grasping objects autonomously in simulated KC-135 zero-g
NASA Technical Reports Server (NTRS)
Norsworthy, Robert S.
1994-01-01
The KC-135 aircraft was chosen for simulated zero-gravity testing of the Extravehicular Activity Helper/Retriever (EVAHR). A software simulation of the EVAHR hardware, KC-135 flight dynamics, collision detection and grasp impact dynamics has been developed to integrate and test the EVAHR software prior to flight testing on the KC-135. The EVAHR software will perform target pose estimation, tracking, and motion estimation for rigid, freely rotating, polyhedral objects. Manipulator grasp planning and trajectory control software has also been developed to grasp targets while avoiding collisions.
1982-03-01
pilot systems. Magnitude of the mutant error is classified as: o Program does not compute. o Program computes but does not run test data. o Program... Test and Integration... The Mapping of SQM to the SDLC... ADS Development... and funds. While the test phase concludes the normal development cycle, one should realize that with software the development continues in the
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, "Fast multi-body... Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to
2006-12-01
Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed, by Blake D. Eikenberry. Approved for public release; distribution is unlimited.
Model-based software process improvement
NASA Technical Reports Server (NTRS)
Zettervall, Brenda T.
1994-01-01
The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.
NASA Technical Reports Server (NTRS)
Al-Hamdan, Mohammad; Crosson, William; Economou, Sigrid; Estes, Maurice, Jr.; Estes, Sue; Hemmings, Sarah; Kent, Shia; Puckett, Mark; Quattrochi, Dale; Wade, Gina
2012-01-01
The overall goal of this study is to address issues of environmental health and enhance public health decision making by using NASA remotely sensed data and products. This study is a collaboration between NASA Marshall Space Flight Center, Universities Space Research Association (USRA), the University of Alabama at Birmingham (UAB) School of Public Health and the Centers for Disease Control and Prevention (CDC) Office of Surveillance, Epidemiology and Laboratory Services. The objectives of this study are to develop high-quality spatial data sets of environmental variables, link these with public health data from a national cohort study, and deliver the environmental data sets and associated public health analyses to local, state and federal end-user groups. Three daily environmental data sets were developed for the conterminous U.S. on different spatial resolutions for the period 2003-2008: (1) spatial surfaces of estimated fine particulate matter (PM2.5) on a 10-km grid using US Environmental Protection Agency (EPA) ground observations and NASA's MODerate-resolution Imaging Spectroradiometer (MODIS) data; (2) a 1-km grid of MODIS Land Surface Temperature (LST); and (3) a 12-km grid of daily incoming solar radiation and maximum and minimum air temperature using the North American Land Data Assimilation System (NLDAS) data. These environmental datasets were linked with public health data from the UAB REasons for Geographic and Racial Differences in Stroke (REGARDS) national cohort study to determine whether exposures to these environmental risk factors are related to cognitive decline, stroke and other health outcomes. These environmental national datasets will also be made available to public health professionals, researchers and the general public via the CDC Wide-ranging Online Data for Epidemiologic Research (WONDER) system, where they can be aggregated to the county, state, or regional level as per users' need and downloaded in tabular, graphical, and map formats. This provides a significant addition to the CDC WONDER online system, allowing public health researchers and policy makers to better include environmental exposure data in the context of other health data available in CDC WONDER. It also substantially expands public access to NASA data, making their use by a wide range of decision-makers feasible.
NASA Astrophysics Data System (ADS)
Al-Hamdan, M. Z.; Crosson, W. L.; Economou, S.; Estes, M., Jr.; Estes, S. M.; Hemmings, S. N.; Kent, S.; Loop, M.; Puckett, M.; Quattrochi, D. A.; Wade, G.; McClure, L.
2012-12-01
The overall goal of this study is to address issues of environmental health and enhance public health decision making by using NASA remotely sensed data and products. This study is a collaboration between NASA Marshall Space Flight Center, Universities Space Research Association (USRA), the University of Alabama at Birmingham (UAB) School of Public Health and the Centers for Disease Control and Prevention (CDC) Office of Surveillance, Epidemiology and Laboratory Services. The objectives of this study are to develop high-quality spatial data sets of environmental variables, link these with public health data from a national cohort study, and deliver the environmental data sets and associated public health analyses to local, state and federal end-user groups. Three daily environmental data sets were developed for the conterminous U.S. on different spatial resolutions for the period 2003-2008: (1) spatial surfaces of estimated fine particulate matter (PM2.5) on a 10-km grid using US Environmental Protection Agency (EPA) ground observations and NASA's MODerate-resolution Imaging Spectroradiometer (MODIS) data; (2) a 1-km grid of MODIS Land Surface Temperature (LST); and (3) a 12-km grid of daily incoming solar radiation and maximum and minimum air temperature using the North American Land Data Assimilation System (NLDAS) data. These environmental datasets were linked with public health data from the UAB REasons for Geographic and Racial Differences in Stroke (REGARDS) national cohort study to determine whether exposures to these environmental risk factors are related to cognitive decline, stroke and other health outcomes. These environmental national datasets will also be made available to public health professionals, researchers and the general public via the CDC Wide-ranging Online Data for Epidemiologic Research (WONDER) system, where they can be aggregated to the county-level, state-level, or regional-level as per users' need and downloaded in tabular, graphical, and map formats. This provides a significant addition to the CDC WONDER online system, allowing public health researchers and policy makers to better include environmental exposure data in the context of other health data available in CDC WONDER. It also substantially expands public access to NASA data, making their use by a wide range of decision-makers feasible.
NASA Technical Reports Server (NTRS)
Al-Hamdan, Mohammad; Crosson, William; Economou, Sigrid; Estes, Maurice, Jr.; Estes, Sue; Hemmings, Sarah; Kent, Shia; Puckett, Mark; Quattrochi, Dale; Wade, Gina
2012-01-01
The overall goal of this study is to address issues of environmental health and enhance public health decision making by utilizing NASA remotely-sensed data and products. This study is a collaboration between NASA Marshall Space Flight Center, Universities Space Research Association (USRA), the University of Alabama at Birmingham (UAB) School of Public Health and the Centers for Disease Control and Prevention (CDC) National Center for Public Health Informatics. The objectives of this study are to develop high-quality spatial data sets of environmental variables, link these with public health data from a national cohort study, and deliver the linked data sets and associated analyses to local, state and federal end-user groups. Three daily environmental data sets were developed for the conterminous U.S. on different spatial resolutions for the period 2003-2008: (1) spatial surfaces of estimated fine particulate matter (PM2.5) exposures on a 10-km grid utilizing the US Environmental Protection Agency (EPA) ground observations and NASA's MODerate-resolution Imaging Spectroradiometer (MODIS) data; (2) a 1-km grid of Land Surface Temperature (LST) using MODIS data; and (3) a 12-km grid of daily Solar Insolation (SI) and maximum and minimum air temperature using the North American Land Data Assimilation System (NLDAS) forcing data. These environmental datasets were linked with public health data from the UAB REasons for Geographic and Racial Differences in Stroke (REGARDS) national cohort study to determine whether exposures to these environmental risk factors are related to cognitive decline and other health outcomes. These environmental national datasets will also be made available to public health professionals, researchers and the general public via the CDC Wide-ranging Online Data for Epidemiologic Research (WONDER) system, where they can be aggregated to the county, state or regional level as per users' need and downloaded in tabular, graphical, and map formats. The linkage of these data provides a useful addition to CDC WONDER, allowing public health researchers and policy makers to better include environmental exposure data in the context of other health data available in this online system. It also substantially expands public access to NASA data, making their use by a wide range of decision-makers feasible.
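The spatial-linkage step these three abstracts describe, assigning each cohort participant the exposure value of the nearest cell on a regular grid, can be sketched as below. The grid origin, spacing, and names are illustrative assumptions, not the study's actual method:

LAT0, LON0, STEP = 24.0, -125.0, 0.1   # hypothetical grid origin and spacing (deg)

def grid_index(lat, lon):
    # Nearest-cell lookup on a regular lat/lon grid.
    return round((lat - LAT0) / STEP), round((lon - LON0) / STEP)

pm25 = {grid_index(33.52, -86.80): 12.3}   # hypothetical PM2.5 grid (ug/m3)

participant = {"id": 42, "lat": 33.50, "lon": -86.82}
participant["pm25"] = pm25.get(grid_index(participant["lat"], participant["lon"]))
print(participant)   # participant record now carries the linked exposure value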
Ultra-deep Large Binocular Camera U-band Imaging of the GOODS-North Field: Depth Versus Resolution
NASA Astrophysics Data System (ADS)
Ashcraft, Teresa A.; Windhorst, Rogier A.; Jansen, Rolf A.; Cohen, Seth H.; Grazian, Andrea; Paris, Diego; Fontana, Adriano; Giallongo, Emanuele; Speziali, Roberto; Testa, Vincenzo; Boutsia, Konstantina; O’Connell, Robert W.; Rutkowski, Michael J.; Ryan, Russell E.; Scarlata, Claudia; Weiner, Benjamin
2018-06-01
We present a study of the trade-off between depth and resolution using a large number of U-band imaging observations in the GOODS-North field from the Large Binocular Camera (LBC) on the Large Binocular Telescope (LBT). Having acquired over 30 hr of data (315 images with 5-6 minute exposures), we generated multiple image mosaics, starting with the best atmospheric seeing images (FWHM ≲ 0.″8), which constitute ∼10% of the total data set. For subsequent mosaics, we added in data with larger seeing values until the final, deepest mosaic included all images with FWHM ≲ 1.″8 (∼94% of the total data set). From the mosaics, we made object catalogs to compare the optimal-resolution, yet shallower image to the lower-resolution but deeper image. We show that the number counts for both images are ∼90% complete to U_AB ≲ 26 mag. Fainter than U_AB ∼ 27 mag, the object counts from the optimal-resolution image start to drop off dramatically (90% between U_AB = 27 and 28 mag), while the deepest image with better surface-brightness sensitivity (μ_U(AB) ≲ 32 mag arcsec^-2) shows a more gradual drop (10% between U_AB ≃ 27 and 28 mag). For the brightest galaxies within the GOODS-N field, structure and clumpy features within the galaxies are more prominent in the optimal-resolution image compared to the deeper mosaics. We conclude that for studies of brighter galaxies and features within them, the optimal-resolution image should be used. However, to fully explore and understand the faintest objects, the deeper imaging with lower resolution is also required. Finally, we find, for 220 brighter galaxies with U_AB ≲ 23 mag, only marginal differences in total flux between the optimal-resolution and lower-resolution light profiles to μ_U(AB) ≲ 32 mag arcsec^-2. In only 10% of the cases are the total-flux differences larger than 0.5 mag. This helps constrain how much flux can be missed from galaxy outskirts, which is important for studies of the Extragalactic Background Light. Based on data acquired using the Large Binocular Telescope (LBT).
Software for Automated Testing of Mission-Control Displays
NASA Technical Reports Server (NTRS)
OHagan, Brian
2004-01-01
MCC Display Cert Tool is a set of software tools for automated testing of computer-terminal displays in spacecraft mission-control centers, including those of the space shuttle and the International Space Station. This software makes it possible to perform tests that are more thorough, take less time, and are less likely to lead to erroneous results, relative to tests performed manually. This software enables comparison of two sets of displays to report command and telemetry differences, generates test scripts for verifying telemetry and commands, and generates a documentary record containing display information, including version and corrective-maintenance data. At the time of reporting the information for this article, work was continuing to add a capability for validation of display parameters against a reconfiguration file.
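The display-comparison function described, diffing two display definitions and reporting command and telemetry differences, reduces to a set comparison. A sketch with hypothetical parameter names (this is not the MCC tool's code):

def diff_displays(old, new):
    # Report parameters present in only one definition, plus parameters
    # whose attributes differ between the two.
    removed = sorted(old.keys() - new.keys())
    added = sorted(new.keys() - old.keys())
    changed = sorted(k for k in old.keys() & new.keys() if old[k] != new[k])
    return {"removed": removed, "added": added, "changed": changed}

old = {"TLM_CABIN_PRESS": {"units": "psia"}, "CMD_VALVE_A": {"arm": True}}
new = {"TLM_CABIN_PRESS": {"units": "kPa"}, "CMD_VALVE_B": {"arm": True}}
print(diff_displays(old, new))
# {'removed': ['CMD_VALVE_A'], 'added': ['CMD_VALVE_B'], 'changed': ['TLM_CABIN_PRESS']}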
NASA Data Acquisitions System (NDAS) Software Architecture
NASA Technical Reports Server (NTRS)
Davis, Dawn; Duncan, Michael; Franzl, Richard; Holladay, Wendy; Marshall, Peggi; Morris, Jon; Turowski, Mark
2012-01-01
The NDAS Software Project is for the development of common low speed data acquisition system software to support NASA's rocket propulsion testing facilities at John C. Stennis Space Center (SSC), White Sands Test Facility (WSTF), Plum Brook Station (PBS), and Marshall Space Flight Center (MSFC).
An experience of qualified preventive screening: shiraz smart screening software.
Islami Parkoohi, Parisa; Zare, Hashem; Abdollahifard, Gholamreza
2015-01-01
Computerized preventive screening software is a cost-effective intervention tool to address non-communicable chronic diseases. Shiraz Smart Screening Software (SSSS) was developed as an innovative tool for qualified screening. It allows simultaneous smart screening of several high-burden chronic diseases and supports reminder notification functionality. The extent to which SSSS affects screening quality is also described. Following software development, preventive screening and annual health examinations of 261 school staff (Medical School of Shiraz, Iran) were carried out in a software-assisted manner. To evaluate the quality of the software-assisted screening, we used a quasi-experimental study design and determined coverage, irregular attendance and inappropriateness proportions in relation to the manual and software-assisted screening, as well as the corresponding number of requested tests. In the manual screening method, 27% of employees were covered (with 94% irregular attendance), while with software-assisted screening the coverage proportion was 79% (attendance status will become clear after the specified time). The frequency of inappropriate screening test requests, before the software implementation, was 41.37% for fasting plasma glucose, 41.37% for lipid profile, 0.84% for occult blood, 0.19% for flexible sigmoidoscopy/colonoscopy, 35.29% for Pap smear, 19.20% for mammography and 11.2% for prostate-specific antigen. All of the above were corrected by the software application. In total, 366 manual screening tests and 334 software-assisted screening tests were requested. SSSS is an innovative tool to improve the quality of preventive screening plans in terms of increased screening coverage, reduction in inappropriateness and in the total number of requested tests.
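The coverage comparison reported here can be checked in outline with a standard contingency test; the counts below are reconstructed from the reported percentages (27% vs. 79% of 261 staff) and are therefore approximate, and scipy is assumed available:

from scipy.stats import chi2_contingency

n = 261
covered_manual, covered_sw = round(0.27 * n), round(0.79 * n)   # ~70 vs ~206
table = [[covered_manual, n - covered_manual],
         [covered_sw,     n - covered_sw]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")   # p << 0.05: the coverage gain is significant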
LV software support for supersonic flow analysis
NASA Technical Reports Server (NTRS)
Bell, W. A.; Lepicovsky, J.
1992-01-01
The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
LV software support for supersonic flow analysis
NASA Technical Reports Server (NTRS)
Bell, William A.
1992-01-01
The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
Ffuzz: Towards full system high coverage fuzz testing on binary executables.
Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing
2018-01-01
Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising deeper code areas in binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug-finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full-system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck in both fuzz testing and symbolic execution. We also proposed two key optimizations to improve the efficiency of full-system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and 844 memory-corruption-vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full-system software stack effectively and efficiently.
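A toy coverage-guided fuzzing loop, the ingredient such hybrid tools combine with symbolic execution, might look like this sketch (purely illustrative; Ffuzz itself targets binaries, not Python, and real fuzzers measure branch coverage via instrumentation):

import random

def target(data: bytes):
    # Toy program under test: nested checks model deeper code areas.
    if len(data) > 3 and data[0] == ord('F'):
        if data[1] == ord('U'):
            if data[2] == ord('Z'):
                raise RuntimeError("bug reached")

def run_with_coverage(data):
    # Crude stand-in for branch coverage: how far the prefix checks get.
    crashed = False
    try:
        target(data)
    except RuntimeError:
        crashed = True
    depth = 0
    for i, ch in enumerate(b"FUZ"):
        if len(data) > i and data[i] == ch:
            depth += 1
        else:
            break
    return depth, crashed

random.seed(0)
corpus, best = [b"AAAA"], -1
for _ in range(200_000):
    parent = bytearray(random.choice(corpus))
    parent[random.randrange(len(parent))] = random.randrange(256)  # 1-byte mutation
    depth, crashed = run_with_coverage(bytes(parent))
    if crashed:
        print("crash input found:", bytes(parent))
        break
    if depth > best:                 # keep inputs that reach new coverage
        best = depth
        corpus.append(bytes(parent))

Random mutation alone struggles past multi-byte comparisons; that barrier is exactly where the paper's selective symbolic execution takes over, solving for inputs that flip the stuck branch.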
Simulation test beds for the space station electrical power system
NASA Technical Reports Server (NTRS)
Sadler, Gerald G.
1988-01-01
NASA Lewis Research Center and its prime contractor are responsible for developing the electrical power system on the space station. The power system will be controlled by a network of distributed processors. Control software will be verified, validated, and tested in hardware and software test beds. Current plans for the software test bed involve using real time and nonreal time simulations of the power system. This paper will discuss the general simulation objectives and configurations, control architecture, interfaces between simulator and controls, types of tests, and facility configurations.
System integration test plan for HANDI 2000 business management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D.
This document presents the system integration test plan for the Commercial-Off-The-Shelf (COTS) PassPort and PeopleSoft software, and for the custom software created to work with the COTS products. The PassPort (PP) software is an integrated application for AP, Contract Management, Inventory Management, Purchasing and Material Safety Data Sheets. The PeopleSoft (PS) software is an integrated application for Project Costing, General Ledger, Human Resources/Training, Payroll, and Base Benefits.
Instrument control software development process for the multi-star AO system ARGOS
NASA Astrophysics Data System (ADS)
Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.
2012-09-01
The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO system consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components, such as lasers, calibration swing arms and slope computers, that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is to run this AO system and to provide convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of large and complex software programs with a maintainable code base, the delivery of software components with the desired functionality, and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software, such as the novel middleware from LINC-NIRVANA, an instrument for the LBT; provide many tests at different functional levels, such as unit tests and regression tests; agree on code and architecture style; and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.
Exploring the Use of a Test Automation Framework
NASA Technical Reports Server (NTRS)
Cervantes, Alex
2009-01-01
It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems in the implementation phase of a development project occur, it normally causes the software delivery date to slide. As a result, testers either need to work longer hours, or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it and proposals to expand this technique to other subsystems.
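The essence of BDD, scenarios written in natural language and bound to automated steps, can be sketched in a few lines of Python. This is a homegrown miniature, not the ALMA TELCAL infrastructure (which would typically use a framework such as Cucumber or behave), and the scenario text is invented:

import re

STEPS = []
def step(pattern):
    # Register a step implementation under a natural-language pattern.
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"Given the calibrator source is ([\w-]+)")
def set_source(ctx, name): ctx["source"] = name

@step(r"When a phase calibration is requested")
def run_cal(ctx): ctx["result"] = f"phase solution for {ctx['source']}"

@step(r"Then a phase solution is produced")
def check(ctx): assert "phase solution" in ctx["result"]

SCENARIO = """Given the calibrator source is J1924-2914
When a phase calibration is requested
Then a phase solution is produced"""

ctx = {}
for line in SCENARIO.splitlines():
    for pattern, fn in STEPS:
        m = pattern.fullmatch(line.strip())
        if m:
            fn(ctx, *m.groups())
            break
print("scenario passed")

The payoff is that the scenario text doubles as documentation a scientist can read and as an executable regression test.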
Real-Time Extended Interface Automata for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin
2014-01-01
Testing and verification of the interface between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their shortcomings in describing temporal information and controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add a clearer and more detailed description of temporal information through the application of time words. We also establish an input interface automaton for every input in order to solve the problems of input control and interface coverage flexibly when applied in the software testing field. Detailed definitions of the RTEIA and the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system. PMID:24892080
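To make the idea of time-annotated interface transitions concrete, here is a minimal hypothetical sketch of an interface automaton whose transitions carry timing guards (not the RTEIA formalism itself, which is defined in the paper; the braking-system states are invented):

from dataclasses import dataclass

@dataclass
class Transition:
    src: str
    inp: str          # input symbol on the component interface
    dst: str
    t_min: float      # timing guard: input is legal only in [t_min, t_max]
    t_max: float

TRANSITIONS = [
    Transition("idle", "brake_cmd", "braking", 0.0, 0.05),
    Transition("braking", "release", "idle", 0.1, 2.0),
]

def step(state, inp, elapsed):
    for tr in TRANSITIONS:
        if tr.src == state and tr.inp == inp and tr.t_min <= elapsed <= tr.t_max:
            return tr.dst
    raise AssertionError(f"illegal timed input {inp!r} in {state!r} at t={elapsed}")

print(step("idle", "brake_cmd", 0.02))   # ok: within the guard -> braking
try:
    step("braking", "release", 0.01)     # released too early: guard violated
except AssertionError as e:
    print(e)

Test cases generated from such a model exercise not just which inputs arrive but when, which is the gap the paper identifies in untimed modeling languages.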
NASA Technical Reports Server (NTRS)
Nickum, J. D.
1978-01-01
The software package developed for the KIM-1 Micro-System and the Mini-L PLL receiver to simplify taking flight test data is described, along with the address and data bus buffers used in the KIM-1 Micro-System. The interface hardware and timing are also presented to completely describe the software programs.
Design ATE systems for complex assemblies
NASA Astrophysics Data System (ADS)
Napier, R. S.; Flammer, G. H.; Moser, S. A.
1983-06-01
The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is immune to any need to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.
Software platform virtualization in chemistry research and university teaching
2009-01-01
Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
Simulation-based Testing of Control Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda
It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the Modelica programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
Waveform Generator Signal Processing Software
DOT National Transportation Integrated Search
1988-09-01
This report describes the software that was developed to process test waveforms that were recorded by crash test data acquisition systems. The test waveforms are generated by an electronic waveform generator developed by MGA Research Corporation unde...
Software error data collection and categorization
NASA Technical Reports Server (NTRS)
Ostrand, T. J.; Weyuker, E. J.
1982-01-01
Software errors detected during development of an interactive special purpose editor system were studied. This product was followed during nine months of coding, unit testing, function testing, and system testing. A new error categorization scheme was developed.
Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA
2011-01-25
A recursive verification protocol that reduces the time variance due to network delays, by putting the subject node at most one hop from the verifier node, provides an efficient means of testing wireless sensor nodes. Since the software signatures are time-based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
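The verification chain described, where each verified node in turn verifies its neighbors and failures are routed around, can be sketched as a graph traversal. This is an illustrative Python rendering, not the patented protocol; check() stands in for the time-based signature test:

from collections import deque

def verify_network(graph, root, check):
    """Verify every node, always testing a subject one hop from its verifier."""
    verified, failed = {root}, set()
    frontier = deque([root])
    while frontier:
        verifier = frontier.popleft()
        for subject in graph[verifier]:
            if subject in verified or subject in failed:
                continue               # avoid testing a node twice
            if check(verifier, subject):
                verified.add(subject)  # subject becomes a verifier in turn
                frontier.append(subject)
            else:
                failed.add(subject)    # halt downstream of the failed node
    return verified, failed

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
ok = lambda v, s: s != "B"             # hypothetical: node B fails its check
print(verify_network(graph, "A", ok))  # D is still reached via C, bypassing B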
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of a model changes from one to another, all functions of a search technique must be reimplemented because the model types differ, even if the same search technique is applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce this redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314
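The framework's central idea, writing the search algorithm once against an abstract model interface so that changing the model type does not force a reimplementation of the search, is essentially the strategy pattern. A sketch under that reading (hypothetical interface; not the paper's actual API):

from abc import ABC, abstractmethod
import random

class TestModel(ABC):
    """Abstract model interface: any model type (state machine, activity
    diagram, ...) plugs into the same search by implementing these two."""
    @abstractmethod
    def random_candidate(self): ...
    @abstractmethod
    def fitness(self, candidate) -> float: ...

def random_search(model: TestModel, budget=1000):
    # The search never touches model internals, only the interface.
    best = model.random_candidate()
    for _ in range(budget):
        c = model.random_candidate()
        if model.fitness(c) > model.fitness(best):
            best = c
    return best

class ToyStateMachine(TestModel):
    def random_candidate(self):        # a candidate test = an input sequence
        return [random.choice("ab") for _ in range(5)]
    def fitness(self, candidate):      # reward sequences with many 'a' inputs
        return candidate.count("a")

print(random_search(ToyStateMachine()))   # tends toward ['a','a','a','a','a']

Swapping in a different model type means writing one new TestModel subclass; random_search (or any metaheuristic behind the same interface) is reused unchanged.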
Smith, M; Murphy, D; Laxmisan, A; Sittig, D; Reis, B; Esquivel, A; Singh, H
2013-01-01
Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results.
Test Driven Development: Lessons from a Simple Scientific Model
NASA Astrophysics Data System (ADS)
Clune, T. L.; Kuo, K.
2010-12-01
In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
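In the spirit of the talk, a first TDD iteration for a toy growth model might start from tests like these, written before the function they exercise (a hypothetical sketch, not the authors' snowflake code):

import unittest

def grow(mass, supersaturation, dt, rate=1.0e-3):
    """Toy deposition-growth step: mass gain proportional to supersaturation."""
    if dt < 0:
        raise ValueError("dt must be non-negative")
    return mass + rate * supersaturation * dt

class TestGrowth(unittest.TestCase):      # in TDD these exist before grow()
    def test_mass_never_decreases(self):
        self.assertGreaterEqual(grow(1.0, 0.5, 10.0), 1.0)
    def test_zero_supersaturation_is_steady(self):
        self.assertEqual(grow(1.0, 0.0, 10.0), 1.0)
    def test_negative_dt_rejected(self):
        with self.assertRaises(ValueError):
            grow(1.0, 0.5, -1.0)

if __name__ == "__main__":
    unittest.main()

Tests like the first two encode basic physical assumptions (mass conservation under zero forcing, monotone growth), which is how TDD doubles as documentation of a model's scientific premises.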
The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
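The "exorbitant testing" point can be made concrete with one line of arithmetic using the standard zero-failure bound: after t failure-free test hours, the failure rate λ can be bounded at confidence 1 - α by λ ≤ -ln(α)/t. Inverting this for the ultrareliability region (a worked illustration consistent with the paper's argument, not a figure from it):

import math

target_rate = 1e-9    # failures per hour: typical life-critical requirement
alpha = 0.01          # 1% chance of wrongly accepting a worse system
t = -math.log(alpha) / target_rate
print(f"required failure-free test time: {t:.2e} hours "
      f"(~{t / 8766:.0f} years)")   # ~4.6e9 hours, roughly half a million years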
Software verification plan for GCS. [guidance and control software
NASA Technical Reports Server (NTRS)
Dent, Leslie A.; Shagnea, Anita M.; Hayhurst, Kelly J.
1990-01-01
This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of software. The plan details the verification activities to be performed at each phase in the development process, contains a step-by-step description of the testing procedures, and discusses all of the tools used throughout the verification process.
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
[Skin graft, smoking and diabetes mellitus type 2].
Pérez-Guisado, Joaquín; Fidalgo-Rodríguez, Félix T; Gaston, Kate L; Rioja, Luis F; Thomas, Steven J
2012-01-01
Smoking and hyperglycemia decrease skin graft survival in specific circumstances. It is well known that smoking and diabetes mellitus (DM) type 2 increase oxidative stress and impair endothelial function. The objective of this retrospective study was to determine whether smoking and DM type 2 are factors associated with lower skin graft survival across different etiologies of the injury associated with the skin loss. It was a bicentric, retrospective, cross-sectional case-control study, carried out on 2457 medical patients who met the inclusion criteria. It was carried out over a 10-year period between January 2000 and December 2009, at Reina Sofía University Hospital (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA). The percentage of successful grafts for each group and its control were analyzed by the Chi-square test. The confidence interval chosen for statistical differences was 95%. Smoking and DM type 2 decreased the percentage of skin graft survival when compared with their control groups. DM type 2 was associated with a greater negative effect on skin graft survival than smoking when compared with their control groups. There was a statistically significant drop in skin graft survival of 18% in the smoking group (range: 68-86%) and 25% in the DM type 2 group (53-78%). The OR showed a clear association between the risk factors studied and lower skin graft success, being stronger for DM type 2. In conclusion, DM type 2 and smoking are factors associated with lower skin graft take.
CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000
2000-06-01
Techniques for Efficiently Generating and Testing Software, by Keith R. Wegner: This paper presents a proven process that uses advanced tools to design, develop and test optimal software. Large Software Systems - Back to Basics: Development methods that work on small problems seem to not scale well to... Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. Ferguson, Pat, Watts S
Open source IPSEC software in manned and unmanned space missions
NASA Astrophysics Data System (ADS)
Edwards, Jacob
Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high latency communications links to experiment how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. Tested open source IPsec software did not meet all the requirements. Software changes were suggested to meet requirements.
The use of emulator-based simulators for on-board software maintenance
NASA Astrophysics Data System (ADS)
Irvine, M. M.; Dartnell, A.
2002-07-01
Traditionally, on-board software maintenance activities within the space sector are performed using hardware-based facilities. These facilities are developed around the use of hardware emulation or breadboards containing target processors. Some sort of environment is provided around the hardware to support the maintenance activities. However, these environments are not easy to use to set up the required test scenarios, particularly when the on-board software executes in a dynamic I/O environment, e.g. attitude control software or data handling software. In addition, the hardware and/or environment may not support the test set-up required during investigations into software anomalies, e.g. raising a spurious interrupt, failing memory, etc., and the overall "visibility" of the executing software may be limited. The Software Maintenance Simulator (SOMSIM) is a tool that can support the traditional maintenance facilities. The following list contains some of the main benefits that SOMSIM can provide: Low-cost, flexible extension to an existing product - an operational simulator containing a software processor emulator; System-level, high-fidelity test-bed in which the software "executes"; A high degree of control/configuration over the entire "system", including contingency conditions perhaps not possible with real hardware; High visibility and control over execution of the emulated software. This paper describes the SOMSIM concept in more detail, and also describes the SOMSIM study being carried out for ESA/ESOC by VEGA IT GmbH.
Modular, Autonomous Command and Data Handling Software with Built-In Simulation and Test
NASA Technical Reports Server (NTRS)
Cuseo, John
2012-01-01
The spacecraft system that plays the greatest role throughout the program lifecycle is the Command and Data Handling System (C&DH), along with the associated algorithms and software. The C&DH takes on this role as a cost driver because it is the brains of the spacecraft and is the element of the system that is primarily responsible for the integration and interoperability of all spacecraft subsystems. During design and development, many activities associated with mission design, system engineering, and subsystem development result in products that are directly supported by the C&DH, such as interfaces, algorithms, flight software (FSW), and parameter sets. A modular system architecture has been developed that provides a means for rapid spacecraft assembly, test, and integration. This modular C&DH software architecture, which can be targeted and adapted to a wide variety of spacecraft architectures, payloads, and mission requirements, eliminates the current practice of rewriting the spacecraft software and test environment for every mission. This software allows mission-specific software and algorithms to be rapidly integrated and tested, significantly decreasing the time involved in the software development cycle. Additionally, the FSW includes an Onboard Dynamic Simulation System (ODySSy) that allows the C&DH software to support rapid integration and test. With this solution, the C&DH software capabilities encompass all phases of the spacecraft lifecycle. ODySSy is an on-board simulation capability built directly into the FSW that provides dynamic built-in test capabilities as soon as the FSW image is loaded onto the processor. It includes a six-degrees-of-freedom, high-fidelity simulation that allows complete closed-loop and hardware-in-the-loop testing of a spacecraft in a ground processing environment without any additional external stimuli. ODySSy can intercept and modify sensor inputs using mathematical sensor models, and can intercept and respond to actuator commands. ODySSy integration is unique in that it allows testing of actual mission sequences on the flight vehicle while the spacecraft is in various stages of assembly, test, and launch operations, all without any external support equipment or simulators. The ODySSy component of the FSW significantly decreases the time required for integration and test by providing an automated, standardized, and modular approach to integrated avionics and component interface and functional verification. ODySSy further provides the capability for on-orbit support in the form of autonomous mission planning and fault protection.
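The interception mechanism described, sensor reads answered by math models and actuator commands routed into a dynamics simulation, can be pictured with this skeletal sketch (hypothetical classes and toy dynamics; ODySSy itself is built into the flight software):

class Simulation:
    def __init__(self):
        self.state = {"rate_dps": 0.0}
    def apply_torque(self, torque, dt=0.1, inertia=10.0):
        self.state["rate_dps"] += (torque / inertia) * dt   # toy dynamics

class SensorModel:
    """Mathematical stand-in for a hardware gyro."""
    def __init__(self, sim):
        self.sim = sim
    def read(self):
        return self.sim.state["rate_dps"]   # model value instead of hardware I/O

sim = Simulation()
gyro = SensorModel(sim)

# Flight-software control loop, unaware it is talking to models:
for _ in range(3):
    rate = gyro.read()                 # intercepted sensor input
    cmd = -0.5 * (rate - 1.0)          # drive the rate toward 1.0 deg/s
    sim.apply_torque(cmd)              # intercepted actuator command
print(f"closed-loop rate: {sim.state['rate_dps']:.3f} deg/s")

Because the interception happens inside the software stack, the same mission sequences can run closed-loop on the flight vehicle with no external stimulation equipment, which is the integration-and-test saving the abstract claims.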
Software engineering and automatic continuous verification of scientific software
NASA Astrophysics Data System (ADS)
Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.
2011-12-01
Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering among scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications, from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems - and it is coupled to a number of other codes, including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code, and it has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate, and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparison to analytical solutions, using the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.
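A minimal instance of the method of manufactured solutions mentioned above: pick a solution, derive the forcing that makes it exact, and check that the solver's error shrinks at the expected rate under refinement (a generic sketch, not Fluidity's test harness):

import math

# Manufactured solution u_m(t) = sin(t) for u' = -k*u + s(t):
# choosing s(t) = cos(t) + k*sin(t) makes u_m exact by construction.
K = 2.0
s = lambda t: math.cos(t) + K * math.sin(t)

def solve(dt, t_end=1.0):
    u, t = 0.0, 0.0                    # u_m(0) = sin(0) = 0
    while t < t_end - 1e-12:
        u += dt * (-K * u + s(t))      # forward Euler step
        t += dt
    return u

errors = [abs(solve(dt) - math.sin(1.0)) for dt in (0.01, 0.005)]
order = math.log(errors[0] / errors[1]) / math.log(2.0)
print(f"observed convergence order: {order:.2f}")  # ~1.0 for forward Euler

An automated suite asserts that the observed order matches the scheme's theoretical order; a silent loss of convergence order is exactly the kind of defect continuous verification is designed to catch.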
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors present at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
Test Driven Development of Scientific Models
NASA Technical Reports Server (NTRS)
Clune, Thomas L.
2014-01-01
Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software, as well as tools and implementation approaches that should address those challenges.
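As a minimal illustration of the test-first rhythm the abstract describes (the function name and example conversion are ours, not from the talk), a test is written before the code that satisfies it:

    import unittest

    # Step 1 (red): the test exists before the code it exercises.
    class TestUnitConversion(unittest.TestCase):
        def test_psi_to_kpa(self):
            self.assertAlmostEqual(psi_to_kpa(1.0), 6.894757, places=5)

    # Step 2 (green): write just enough code to make the test pass.
    def psi_to_kpa(psi):
        return psi * 6.894757  # 1 psi = 6.894757 kPa

    if __name__ == "__main__":
        unittest.main()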
[Application of Stata software to test heterogeneity in meta-analysis method].
Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong
2008-07-01
To introduce the application of Stata software to heterogeneity testing in meta-analysis. A data set was set up according to the example in the study, and the corresponding commands of the methods in the Stata 9 software were applied to test the example. The methods used were the Q-test and the I² statistic attached to the fixed-effect model forest plot, the H statistic and the Galbraith plot. The existence of heterogeneity among studies could be detected by the Q-test and the H statistic, and the degree of heterogeneity could be detected by the I² statistic. The outliers which were the sources of the heterogeneity could be spotted from the Galbraith plot. Heterogeneity testing in meta-analysis can be completed by the four methods in the Stata software simply and quickly. Among the four methods, the H and I² statistics are more robust, and the outliers responsible for heterogeneity can be clearly seen in the Galbraith plot.
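The same statistics are straightforward to compute outside Stata. A sketch in Python with illustrative numbers (not the paper's example data):

    import numpy as np
    from scipy.stats import chi2

    theta = np.array([0.42, 0.35, 0.60, 0.28, 0.51])      # illustrative effect estimates
    var = np.array([0.010, 0.020, 0.015, 0.008, 0.025])   # their variances

    w = 1.0 / var                           # fixed-effect weights
    pooled = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - pooled) ** 2)   # Cochran's Q
    df = theta.size - 1
    p_value = chi2.sf(Q, df)                # Q ~ chi-square(df) under homogeneity
    I2 = max(0.0, (Q - df) / Q) * 100.0     # percent of variation due to heterogeneity
    H = np.sqrt(Q / df)                     # H > 1 suggests heterogeneity
    # A Galbraith plot graphs theta/sqrt(var) against 1/sqrt(var); outlying
    # studies fall outside the +/-2 band around the fitted line.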
Model-Based Development of Automotive Electronic Climate Control Software
NASA Astrophysics Data System (ADS)
Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan
With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task, and an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on their domain of expertise rather than on writing large amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. The back-to-back testing approach is presented, which ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink Report Generator, used to create design documents from the models, is presented along with its usage to run the simulation model and capture the results into the test report. Test automation using a model-based development tool that supports the use of a unique set of test cases for several testing levels, and a test procedure that is independent of the software and hardware platform, is also presented.
Orbit attitude processor. STS-1 bench program verification test plan
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1980-01-01
A plan for the static verification of the STS-1 ATT PROC ORBIT software requirements is presented. The orbit version of the SAPIENS bench program is used to generate the verification data. A brief discussion of the simulation software and flight software modules is presented along with a description of the test cases.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022574 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/ Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022575 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/ Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Tank Monitor and Control System (TMACS) Rev 11.0 Acceptance Test Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
HOLM, M.J.
The purpose of this document is to describe tests performed to validate Revision 11 of the Tank Monitor and Control System (TMACS) and verify that the software functions as intended by design. This document is intended to test the software portion of TMACS. The tests will be performed on the development system. The software to be tested is the TMACS knowledge bases (KB) and the I/O driver/services. The development system will not be talking to field equipment; instead, the field equipment is simulated using emulators or multiplexers in the lab.
Computational Simulations and the Scientific Method
NASA Technical Reports Server (NTRS)
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
Smith, M.; Murphy, D.; Laxmisan, A.; Sittig, D.; Reis, B.; Esquivel, A.; Singh, H.
2013-01-01
Background: Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. Objectives: The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. Methods: We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Results: Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. Conclusion: To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results. PMID:24155789
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.
1987-01-01
The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.
Designing Control System Application Software for Change
NASA Technical Reports Server (NTRS)
Boulanger, Richard
2001-01-01
The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.
Workstation-Based Avionics Simulator to Support Mars Science Laboratory Flight Software Development
NASA Technical Reports Server (NTRS)
Henriquez, David; Canham, Timothy; Chang, Johnny T.; McMahon, Elihu
2008-01-01
The Mars Science Laboratory developed the WorkStation TestSet (WSTS) to support flight software development. The WSTS is the non-real-time flight avionics simulator that is designed to be completely software-based and run on a workstation class Linux PC. This provides flight software developers with their own virtual avionics testbed and allows device-level and functional software testing when hardware testbeds are either not yet available or have limited availability. The WSTS has successfully off-loaded many flight software development activities from the project testbeds. As of the writing of this paper, the WSTS has averaged an order of magnitude more usage than the project's hardware testbeds.
Integrated testing and verification system for research flight software design document
NASA Technical Reports Server (NTRS)
Taylor, R. N.; Merilatt, R. L.; Osterweil, L. J.
1979-01-01
The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced lifecycle costs as foremost goals. Capabilities have been included in the design for static detection of data flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
78 FR 1162 - Cardiovascular Devices; Reclassification of External Cardiac Compressor
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... safety and electromagnetic compatibility; For devices containing software, software verification... electromagnetic compatibility; For devices containing software, software verification, validation, and hazard... electrical components, appropriate analysis and testing must validate electrical safety and electromagnetic...
Practical Issues in Implementing Software Reliability Measurement
NASA Technical Reports Server (NTRS)
Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.
1999-01-01
Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
Hardware and Software Integration to Support Real-Time Space Link Emulation
NASA Technical Reports Server (NTRS)
Murawski, Robert; Bhasin, Kul; Bittner, David; Sweet, Aaron; Coulter, Rachel; Schwab, Devin
2012-01-01
Prior to operational use, communications hardware and software must be thoroughly tested and verified. In space-link communications, field testing equipment can be prohibitively expensive and cannot test to non-ideal situations. In this paper, we show how software and hardware emulation tools can be used to accurately model the characteristics of a satellite communication channel in a lab environment. We describe some of the challenges associated with developing an emulation lab and present results to demonstrate the channel modeling. We then show how network emulation software can be used to extend a hardware emulation model without requiring additional network and channel simulation hardware.
Hardware and Software Integration to Support Real-Time Space-Link Emulation
NASA Technical Reports Server (NTRS)
Murawski, Robert; Bhasin, Kul; Bittner, David
2012-01-01
Prior to operational use, communications hardware and software must be thoroughly tested and verified. In space-link communications, field testing equipment can be prohibitively expensive and cannot test to non-ideal situations. In this paper, we show how software and hardware emulation tools can be used to accurately model the characteristics of a satellite communication channel in a lab environment. We describe some of the challenges associated with developing an emulation lab and present results to demonstrate the channel modeling. We then show how network emulation software can be used to extend a hardware emulation model without requiring additional network and channel simulation hardware.
The use of applied software for the professional training of students studying humanities
NASA Astrophysics Data System (ADS)
Sadchikova, A. S.; Rodin, M. M.
2017-01-01
Research practice is an integral part of humanities students' training. In this regard, the training process should incorporate modern information technologies. This paper examines the most popular applied software products used for data processing in social science. For testing purposes we selected the most commonly preferred professional packages: MS Excel, IBM SPSS Statistics, STATISTICA, STADIA. Moreover, the article contains testing results for the specialized software Prikladnoy Sotsiolog, which is applicable to the preparation stage of research. The specialized software was tested during one term in groups of students studying humanities.
STGT program: Ada coding and architecture lessons learned
NASA Technical Reports Server (NTRS)
Usavage, Paul; Nagurney, Don
1992-01-01
STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac
2017-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
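The fitting step described here can be sketched generically. Below, synthetic weekly counts stand in for the mission's defect data (which we do not have); the delayed S-shaped NHPP mean value function m(t) = a(1 - (1 + bt)e^(-bt)) is one of the best-fit model families named in the abstract:

    import numpy as np
    from scipy.optimize import curve_fit

    def s_shaped(t, a, b):
        # Delayed S-shaped NHPP mean value function
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    weeks = np.arange(1, 13, dtype=float)
    cum_defects = np.array([2, 6, 13, 22, 32, 41, 48, 53, 57, 59, 60, 61], dtype=float)

    (a, b), _ = curve_fit(s_shaped, weeks, cum_defects, p0=[70.0, 0.3])
    residual_faults = a - cum_defects[-1]   # expected faults still latent at release
    # Failure intensity is the derivative m'(t) = a * b^2 * t * exp(-b*t):
    failure_intensity = a * b**2 * weeks[-1] * np.exp(-b * weeks[-1])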
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters terminal to communicate with the ACTS for various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac
2016-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions.
Modular Rocket Engine Control Software (MRECS)
NASA Technical Reports Server (NTRS)
Tarrant, Charlie; Crook, Jerry
1997-01-01
The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.
Ffuzz: Towards full system high coverage fuzz testing on binary executables
2018-01-01
Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas in binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool—Ffuzz—on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck both in fuzz testing and in symbolic execution. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently. PMID:29791469
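The fuzz-testing half of such a hybrid is simple to sketch. This is a generic mutation fuzzer of our own, not Ffuzz itself, and the target is any Python callable standing in for the parser or handler under test:

    import random

    def mutate(seed: bytes) -> bytes:
        buf = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)  # flip a bit
        return bytes(buf)

    def fuzz(target, seed: bytes, iterations: int = 100_000):
        crashes = []
        for _ in range(iterations):
            case = mutate(seed)
            try:
                target(case)           # code under test
            except Exception as exc:   # a crash proxy; real fuzzers watch signals
                crashes.append((case, repr(exc)))
        return crashes

    # Symbolic execution complements this loop by solving for inputs that satisfy
    # the branch conditions random mutation rarely hits.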
Manyak, Kristin A.; Abdenour, Thomas E.; Rauh, Mitchell J.; Baweja, Harsimran S.
2016-01-01
Background: As recently dictated by the American Medical Society, balance testing is an important component in the clinical evaluation of concussion. Despite this, previous research on the efficacy of balance testing for concussion diagnosis suggests low sensitivity (∼30%), based primarily on the popular Balance Error Scoring System (BESS). The Balance Tracking System (BTrackS, Balance Tracking Systems Inc., San Diego, CA, USA) consists of a force plate (BTrackS Balance Plate) and software (BTrackS Sport Balance) which can quickly (<2 min) perform concussion balance testing with gold standard accuracy. Purpose: The present study aimed to determine the sensitivity of the BTrackS Balance Plate and Sport Balance software for concussion diagnosis. Study Design: Cross-sectional study. Methods: Preseason baseline balance testing of 519 healthy Division I college athletes playing sports with a relatively high risk for concussions was performed with the BTrackS Balance Test. Testing was administered by certified athletic training staff using the BTrackS Balance Plate and Sport Balance software. Of the baselined athletes, 25 later experienced a concussion during the ensuing sport season. Post-injury balance testing was performed on these concussed athletes within 48 hours of injury, and the sensitivity of the BTrackS Balance Plate and Sport Balance software was estimated based on the number of athletes showing a balance decline according to the criteria specified in the Sport Balance software. This criterion is based on the minimal detectable change statistic with a 90% confidence level (i.e., 90% specificity). Results: Of the 25 athletes who experienced concussions, 16 had balance declines relative to baseline testing results according to the BTrackS Sport Balance software criteria. This corresponds to an estimated concussion sensitivity of 64%, which is twice as great as that reported previously for the BESS. Conclusions: The BTrackS Balance Plate and Sport Balance software has the greatest concussion sensitivity of any balance testing instrument reported to date. Level of Evidence: Level 2 (individual cross-sectional diagnostic study). PMID:27104048
The NOvA software testing framework
NASA Astrophysics Data System (ADS)
Tamsett, M.; C Group
2015-12-01
The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are greater than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a slick front-end web-based interface. This interface utilises modern, cross-platform, visualisation libraries to render the test results in a meaningful way. They are fast and flexible, allowing for the easy addition of new tests and datasets. In total upwards of 14 individual streams are regularly tested amounting to over 70 individual software processes, producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software and thus ensure that data is available for physics analysis in a timely and robust manner.
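A hedged miniature of the wrapping idea (the names are ours; the real framework is far richer): a Python function runs one software tier as a subprocess and returns a result record a web front end could render.

    import subprocess
    import time

    def run_tier(executable, config, input_files):
        """Run one software tier, capture status, runtime and a stderr tail."""
        start = time.time()
        proc = subprocess.run([executable, "-c", config, *input_files],
                              capture_output=True, text=True)
        return {
            "tier": executable,
            "passed": proc.returncode == 0,
            "seconds": round(time.time() - start, 1),
            "stderr_tail": proc.stderr[-500:],  # enough context for the web view
        }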
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.
NDAS Hardware Translation Layer Development
NASA Technical Reports Server (NTRS)
Nazaretian, Ryan N.; Holladay, Wendy T.
2011-01-01
The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used at E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
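The per-stand driver-package idea can be sketched as a plugin interface. All class and method names below are hypothetical, invented for illustration:

    from abc import ABC, abstractmethod

    class Preston8300AU:
        """Driver for the signal conditioner shared by the A1, A2 and B2 stands."""
        def read(self, channel: int) -> float:
            return 0.0  # stub: the real driver talks to hardware

    class StandDrivers(ABC):
        """Interface the DAS software loads; one implementation per test stand."""
        @abstractmethod
        def read_channel(self, channel: int) -> float: ...

    class B2Drivers(StandDrivers):
        def __init__(self):
            self.conditioner = Preston8300AU()  # universal driver, reused verbatim
        def read_channel(self, channel: int) -> float:
            return self.conditioner.read(channel)

    # Pointing the software at a stand means selecting its driver package:
    drivers: StandDrivers = B2Drivers()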
Validation of software for calculating the likelihood ratio for parentage and kinship.
Drábek, J
2009-03-01
Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical, as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence, or to identify persons or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood-ratio formulas, or peer-reviewed results of difficult paternity cases, were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee
2015-09-01
This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) on main physics and numerical method of the RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7.
NASA Technical Reports Server (NTRS)
French, Scott W.
1991-01-01
The goals are to show that verifying and validating a software system is a required part of software development and has a direct impact on the software's design and structure. Workshop tasks are given in the areas of statistics, integration/system test, unit and architectural testing, and a traffic controller problem.
Proceedings of the Eighth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1983-01-01
The four major topics of discussion included: the NASA Software Engineering Laboratory, software testing, human factors in software engineering and software quality assessment. As in the past years, there were 12 position papers presented (3 for each topic) followed by questions and very heavy participation by the general audience.
Guidelines for testing and release procedures
NASA Technical Reports Server (NTRS)
Molari, R.; Conway, M.
1984-01-01
Guidelines and procedures are recommended for the testing and release of the types of computer software efforts commonly performed at NASA/Ames Research Center. All recommendations are based on the premise that testing and release activities must be specifically selected for the environment, size, and purpose of each individual software project. Guidelines are presented for building a Test Plan and using formal Test Plan and Test Case Inspections on it. Frequent references are made to NASA/Ames Guidelines for Software Inspections. Guidelines are presented for selecting an Overall Test Approach and for each of the four main phases of testing: (1) Unit Testing of Components, (2) Integration Testing of Components, (3) System Integration Testing, and (4) Acceptance Testing. Tools used for testing are listed, including those available from operating systems used at Ames, specialized tools which can be developed, unit test drivers, stub module generators, and the use of formal test reporting schemes.
Support for Diagnosis of Custom Computer Hardware
NASA Technical Reports Server (NTRS)
Molock, Dwaine S.
2008-01-01
The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects, and can be customized for use with different processors and interfaces. The routines are supported, regardless of the architecture of a processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. There is also support for the software with respect to Mongoose V, RAD750, and PPC405 processors or their equivalents.
The Mars Science Laboratory Entry, Descent, and Landing Flight Software
NASA Technical Reports Server (NTRS)
Gostelow, Kim P.
2013-01-01
This paper describes the design, development, and testing of the EDL program from the perspective of the software engineer. We briefly cover the overall MSL flight software organization, and then the organization of EDL itself. We discuss the timeline, the structure of the GNC code (but not the algorithms as they are covered elsewhere in this conference) and the command and telemetry interfaces. Finally, we cover testing and the influence that testability had on the EDL flight software design.
Top Down Implementation Plan for system performance test software
NASA Technical Reports Server (NTRS)
Jacobson, G. N.; Spinak, A.
1982-01-01
The top down implementation plan used for the development of system performance test software during the Mark IV-A era is described. The plan is based upon the identification of the hierarchical relationship of the individual elements of the software design, the development of a sequence of functionally oriented demonstrable steps, the allocation of subroutines to the specific step where they are first required, and objective status reporting. The results are: determination of milestones, improved managerial visibility, better project control, and a successful software development.
Enhanced Master Controller Unit Tester
NASA Technical Reports Server (NTRS)
Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony
2007-01-01
The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
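Mutation analysis itself is easy to illustrate in miniature (a generic example of ours, not the paper's Petri-net machinery): a mutant that survives the test suite signals a missing test.

    def is_overspeed(v, limit):
        return v > limit           # original

    def is_overspeed_mutant(v, limit):
        return v >= limit          # relational-operator mutant

    tests = [(100.0, 120.0, False), (130.0, 120.0, True)]

    def killed(candidate):
        return any(candidate(v, lim) != want for v, lim, want in tests)

    print(killed(is_overspeed_mutant))  # False: the mutant survives.
    # Adding the boundary case (120.0, 120.0, False) kills it, so mutation
    # analysis has pointed directly at the test the suite was missing.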
[Confirming the Utility of RAISUS Antifungal Susceptibility Testing by New-Software].
Ono, Tomoko; Suematsu, Hiroyuki; Sawamura, Haruki; Yamagishi, Yuka; Mikamo, Hiroshige
2017-08-15
Clinical and Laboratory Standards Institute (CLSI) methods for susceptibility testing of yeasts are used in Japan. On the other hand, the methods have some disadvantages: 1) reading at 24 and 48 h; 2) using an unclear scale, approximately 50% inhibition, to determine MICs; 3) accounting for trailing growth and paradoxical effects. These issues make susceptibility testing of yeasts difficult. The old software of RAISUS, Ver. 6.0 series, resolved problems 1) and 2) but did not resolve problem 3). Recently, the new software of RAISUS, Ver. 7.0 series, resolved problem 3). We examined whether the new software settled all of these issues. Eighty-four Candida isolates from Aichi Medical University were used in this study. We compared the MICs obtained by using the RAISUS antifungal susceptibility testing of yeasts, RSMY1, with those obtained by using ASTY. The concordance rates (±four-fold of MICs) between the MICs obtained by using ASTY and RSMY1 with the new software were more than 90%, except for miconazole (MCZ). The rate for MCZ was low, but the MICs obtained by using CLSI methods and the Yeast-like Fungus DP 'EIKEN' methods, E-DP, were equivalent to those of RSMY1 using the new software. The frequency of skip effects on RSMY1 using the new software markedly decreased relative to RSMY1 using the old software. In cases showing trailing growth, the new software of RAISUS made it possible to choose the correct MICs and to display a trailing-growth indicator on the result screen. The new software of RAISUS enhances its usability and the accuracy of MICs. Using an automatic instrument to determine MICs is useful for obtaining objective results easily.
Power, Avionics and Software - Phase 1.0: Subsystem Integration Test Report
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This report describes Power, Avionics and Software (PAS) 1.0 subsystem integration testing and test results that occurred in August and September of 2013. This report covers the capabilities of each PAS assembly to meet integration test objectives for non-safety critical, non-flight, non-human-rated hardware and software development. This test report is the outcome of the first integration of the PAS subsystem and is meant to provide data for subsequent designs, development and testing of the future PAS subsystems. The two main objectives were to assess the ability of the PAS assemblies to exchange messages and to perform audio testing of both inbound and outbound channels. This report describes each test performed, defines the test, the data, and provides conclusions and recommendations.
Absorbing Software Testing into the Scrum Method
NASA Astrophysics Data System (ADS)
Tuomikoski, Janne; Tervonen, Ilkka
In this paper we study how to absorb software testing into the Scrum method. We conducted the research as an action research study during the years 2007-2008, with three iterations. The results showed that testing can, and even should, be absorbed into the Scrum method. The testing team was merged into the Scrum teams. The teams can now deliver better working software in a shorter time, because testing keeps track of the progress of the development. Team spirit is also higher, because the Scrum team members are committed to the same goal. The biggest change from the test manager's point of view was the organized Product Owner Team. The test manager no longer has a dedicated testing team, and in the future all testing tasks have to be assigned through the Product Backlog.
A test matrix sequencer for research test facility automation
NASA Technical Reports Server (NTRS)
Mccartney, Timothy P.; Emery, Edward F.
1990-01-01
The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu-driven, with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
NASA Astrophysics Data System (ADS)
Plasson, Ph.
2006-11-01
LESIA, in close cooperation with CNES, DLR and IWF, is responsible for the tests and validation of the CoRoT instrument digital process unit, which is made up of the BEX and DPU assembly. The main part of the work consisted of validating the DPU software and testing the BEX/DPU coupling. This work took more than two years, owing to the central role of the software under test and its technical complexity. The first task in the validation process was to carry out the acceptance tests of the DPU software. These tests consisted of checking each of the 325 requirements identified in the URD (User Requirements Document) and were run in a configuration using the DPU coupled to a BEX simulator. During the acceptance tests, all the transversal functionalities of the DPU software, such as TC/TM management, state machine management, BEX driving, system monitoring and the maintenance functionalities, were checked in depth. The functionalities associated with the seismology and exoplanetology processing, such as the loading of window and mask descriptors or the configuration of the service execution parameters, were also exhaustively tested. After the DPU software had been validated against the user requirements using a BEX simulator, the next step was to couple the DPU and the BEX in order to check that the combined unit worked correctly and met the performance requirements. These tests were conducted in two phases: the first devoted to the functional aspects and the interface tests, the second to the performance aspects. The performance tests were based on the use of the DPU software scientific services and on full images representative of a realistic sky as inputs. They also relied on a reference set of windows and parameters, provided by the scientific team, that was representative, in terms of load and complexity, of the one that could be used during the observation mode of the CoRoT instrument. They were run in a configuration using either a BCC simulator or a real BCC coupled to a video simulator to feed the BEX/DPU unit. The validation of the scientific algorithms was conducted in parallel with the BEX/DPU coupling tests. The objective of this phase was to check that the algorithms implemented in the scientific services of the DPU software conformed to those specified in the URD and that the numerical precision obtained corresponded to that expected. Forty test cases were defined, covering the fine and coarse angular error measurement processing, the rejection of bright pixels, the subtraction of the offset and the sky background, the photometry algorithms, SAA handling and reference image management. For each test case, the LESIA scientific team produced by simulation, using the instrument model, the dynamic data files and parameter sets used to feed, on the one hand, the DPU and, on the other hand, a model of the onboard software. These data files correspond to FITS images (black windows, star windows, offset windows) containing varying levels of disturbance, making it possible to test the DPU software in dynamic mode over durations of up to 48 hours. To perform the test and validation activities of the CoRoT instrument digital process unit, a set of software testing tools (Software Ground Support Equipment, hereafter "SGSE") was developed by LESIA.
Thanks to their versatility and modularity, these software testing tools were used during all the integration, test and validation activities of the instrument and of its subsystems, CoRoTCase and CoRoTCam. The CoRoT SGSE were specified, designed and developed by LESIA. The objective was to have a software system allowing the users (the onboard-software validation team, the instrument integration team, etc.) to remotely control and monitor the whole instrument or a single subsystem, such as the DPU coupled to a BEX simulator or the BEX/DPU unit coupled to a BCC simulator. The idea was to be able to interact in real time with the system under test by driving the various EGSE, but also to play test procedures implemented as scripts organized into libraries, to record the telemetry and housekeeping data in a database, and to carry out post-mortem analyses.
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
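One simple, hedged reading of the least-squares certification idea (illustrative interfailure times; this is not necessarily the paper's estimator) fits a geometric MTTF growth trend to the timing data recorded during statistical test:

    import numpy as np

    # Interfailure times (hours) logged during statistical usage testing.
    t = np.array([2.0, 3.5, 5.0, 9.0, 15.0, 26.0])
    k = np.arange(1, t.size + 1)

    # Least-squares fit of log(t_k) = log(A) + k*log(B), i.e. MTTF ~ A * B**k.
    logB, logA = np.polyfit(k, np.log(t), 1)
    A, B = np.exp(logA), np.exp(logB)

    mttf_now = A * B ** t.size               # estimated MTTF at release
    mttf_after_next_fix = A * B ** (t.size + 1)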
Development of a flight software testing methodology
NASA Technical Reports Server (NTRS)
Mccluskey, E. J.; Andrews, D. M.
1985-01-01
The research to develop a testing methodology for flight software is described. An experiment was conducted in using assertions to dynamically test digital flight control software. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters. In addition, a prototype watchdog task system was built to evaluate the effectiveness of executing assertions in parallel by using the multitasking features of Ada.
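An executable assertion in this style might look like the following sketch; the parameter names and limits are invented for illustration, not taken from the experiment:

    def check_pitch_command(pitch_cmd, prev_cmd, dt):
        """Executable assertions on one control parameter.

        The rate check also collaterally exercises the upstream filter that
        produced prev_cmd - the indirect testing effect the study found
        most valuable.
        """
        assert -30.0 <= pitch_cmd <= 30.0, "pitch command out of range"
        assert abs(pitch_cmd - prev_cmd) / dt <= 5.0, "pitch rate implausible"
        return pitch_cmd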
Effectiveness of back-to-back testing
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.
1987-01-01
Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with the relatively high correlation the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that the multiversion software development is a feasible and cost effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
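A minimal back-to-back harness (a generic sketch, not the experiment's setup) drives functionally equivalent versions with the same inputs and logs any disagreement for adjudication:

    import math
    import random

    def sqrt_v1(x):
        return math.sqrt(x)

    def sqrt_v2(x):                      # an independently written version
        r = x if x > 1 else 1.0
        for _ in range(60):              # Newton iteration
            r = 0.5 * (r + x / r)
        return r

    def back_to_back(versions, gen_input, trials=10_000, tol=1e-9):
        disagreements = []
        for _ in range(trials):
            x = gen_input()
            outs = [v(x) for v in versions]
            if max(outs) - min(outs) > tol:
                disagreements.append((x, outs))   # adjudicate these offline
        return disagreements

    print(len(back_to_back([sqrt_v1, sqrt_v2], lambda: random.uniform(0.0, 1e6))))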
Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.
Villarrubia, J S; Tondare, V N; Vladár, A E
2016-01-01
The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
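The rough-skin construction can be sketched in one dimension (normalization conventions for PSD synthesis vary; the scaling and power-law spectrum below are assumptions): draw random phases against a target power spectral density and invert the FFT.

    import numpy as np

    rng = np.random.default_rng(42)
    n, dx = 4096, 1.0                       # samples and sample spacing (nm)
    freq = np.fft.rfftfreq(n, d=dx)

    psd = np.zeros_like(freq)
    psd[1:] = freq[1:] ** -2                # assumed power-law roughness spectrum

    # Random-phase synthesis: amplitude from the PSD, phase uniform in [0, 2*pi).
    amplitude = np.sqrt(psd * n / (2 * dx))
    phases = rng.uniform(0.0, 2.0 * np.pi, freq.size)
    spectrum = amplitude * np.exp(1j * phases)

    profile = np.fft.irfft(spectrum, n=n)   # the rough skin to wrap around the line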
Overview of T.E.S.T. (Toxicity Estimation Software Tool)
This talk provides an overview of T.E.S.T. (Toxicity Estimation Software Tool). T.E.S.T. predicts toxicity values and physical properties using a variety of different QSAR (quantitative structure activity relationship) approaches including hierarchical clustering, group contribut...
2010-01-01
[Flattened table residue: finding counts for Symantec Server Antivirus, Service Passwords, Banner Needs, and Unauthorized Software.] ... software needed to manage and operate systems in the testing rooms. Systems in the testing rooms were made to resemble shipboard Navy systems as closely...i.e., workstation and server software, routing and switching, operating systems, and so forth). This training was also designed to provide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae
Reliability testing was performed for the software of the Shutdown (SDS) Computers for Wolsong Nuclear Power Plants Units 2, 3 and 4. The tests applied random test profiles to the SDS Computers and compared the outputs with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.
Automated Source-Code-Based Testing of Object-Oriented Software
NASA Astrophysics Data System (ADS)
Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten
2014-08-01
With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source-code-based testing for C++.
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
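The first algorithm reduces to a few lines. This is a hedged sketch with an invented register name and mask, not the flight implementation:

    DEVICE_MASKS = {"status_reg": 0xFFFE}   # mask chosen per device under test
    INJECT_ERRORS = True

    def read_device(name):
        return 0x0001                       # stub standing in for healthy hardware

    def read_with_injection(name):
        value = read_device(name)
        if INJECT_ERRORS and name in DEVICE_MASKS:
            value &= DEVICE_MASKS[name]     # AND yields a value BIT cannot expect
        return value

    def bit_status_check():
        return read_with_injection("status_reg") == 0x0001

    print(bit_status_check())  # False: the BIT failure path is now exercised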
Automation Hooks Architecture Trade Study for Flexible Test Orchestration
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.
2010-01-01
We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented Restful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
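As a rough illustration of the resource-oriented RESTful style the study converged on, the Python sketch below commands a test resource and retrieves a reading over plain HTTP using only the standard library. The host address, resource paths, and JSON fields are invented for the example; in the architecture described, a real test set would advertise its address via zeroconf and exchange results in ATML.

import json
import urllib.request

BASE = "http://testset.local:8080"  # hypothetical, zeroconf-discovered address

def command(resource: str, payload: dict) -> None:
    """POST a command document to a test-set resource (illustrative)."""
    req = urllib.request.Request(
        f"{BASE}/{resource}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

def retrieve(resource: str) -> dict:
    """GET the current state of a test-set resource (illustrative)."""
    with urllib.request.urlopen(f"{BASE}/{resource}") as resp:
        return json.load(resp)

# Example use, assuming such resources exist on the test set:
# command("signal-generator/frequency", {"hz": 2.4e9})
# print(retrieve("power-meter/reading"))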
NASA Technical Reports Server (NTRS)
Lowman, Douglas S.; Withers, B. Edward; Shagnea, Anita M.; Dent, Leslie A.; Hayhurst, Kelly J.
1990-01-01
A variety of instructions to be used in the development of implementations of software for the Guidance and Control Software (GCS) project is described. This document fulfills the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, 'Software Considerations in Airborne Systems and Equipment Certification' requirements for document No. 4, which specifies the information necessary for understanding and programming the host computer, and document No. 12, which specifies the software design and implementation standards that are applicable to the software development and testing process. Information on the following subjects is contained: activity recording, communication protocol, coding standards, change management, error handling, design standards, problem reporting, module testing logs, documentation formats, accuracy requirements, and programmer responsibilities.
The software system development for the TAMU real-time fan beam scatterometer data processors
NASA Technical Reports Server (NTRS)
Clark, B. V.; Jean, B. R.
1980-01-01
A software package was designed and written to process in real time any one quadrature channel pair of radar scatterometer signals from the NASA L- or C-Band radar scatterometer systems. The software was successfully tested in the C-Band processor breadboard hardware using recorded radar and NERDAS (NASA Earth Resources Data Annotation System) signals as the input data sources. The processor development program and the overall processor theory of operation and design are described. The real-time processor software system is documented, and the results of the laboratory software tests and recommendations for the efficient application of the data processing capabilities are presented.
NASA Astrophysics Data System (ADS)
Hart, D. M.; Merchant, B. J.; Abbott, R. E.
2012-12-01
The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data were collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude, and 4) twenty-six discrete frequencies from 0.353 Hz to 64 Hz. The results are compared to manufacturer-supplied data sheets.
USL/DBMS NASA/PC R and D project system testing standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kavi, Srinu; Moreau, Dennis R.; Yan, Lin
1984-01-01
A set of system testing standards to be used in the development of all C software within the NASA/PC Research and Development Project is established. Testing will be considered in two phases: the program testing phase and the system testing phase. The objective of these standards is to provide guidelines for the planning and conduct of program and software system testing.
Application of Kingview and PLC in friction durability test system
NASA Astrophysics Data System (ADS)
Gao, Yinhan; Cui, Jing; Yang, Kaiyu; Ke, Hui; Song, Bing
2013-01-01
Using PLC and Kingview software, a friction durability test system is designed. The overall program, hardware configuration, software structure and monitoring interface are described in detail. The PLC ensures the stability of data acquisition, and the Kingview software makes the HMI easy to manipulate. Practical application shows that the proposed system is inexpensive and highly reliable.
The environmental control and life support system advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.
1991-01-01
The objective of the ECLSS Advanced Automation project includes reduction of the risk associated with the integration of new, beneficial software techniques. Demonstrations of this software to baseline engineering and test personnel will show the benefits of these techniques. The advanced software will be integrated into ground testing and ground support facilities, familiarizing key personnel with its usage.
Small-scale fixed wing airplane software verification flight test
NASA Astrophysics Data System (ADS)
Miller, Natasha R.
The increased demand for micro Unmanned Air Vehicles (UAVs), driven by military requirements, commercial use, and academia, is creating a need for the ability to quickly and accurately conduct low Reynolds number aircraft design. There exist several open source software programs that are free or inexpensive and can be used for large-scale aircraft design, but few software programs target the realm of low Reynolds number flight. XFLR5 is an open source, free-to-download software program that attempts to take into consideration the viscous effects that occur at low Reynolds number in airfoil design, 3D wing design, and 3D airplane design. An off-the-shelf remote-control airplane was used as a test bed to model in XFLR5 and then compared to flight test data. Flight testing focused on the stability modes of the 3D plane, specifically the phugoid mode. Design and execution of the flight tests were accomplished for the RC airplane using methodology from full-scale military airplane test procedures. Results from flight testing were not conclusive in determining the accuracy of the XFLR5 software program. There were several sources of uncertainty that did not allow for a full analysis of the flight test results. An off-the-shelf drone autopilot was used as the data collection device for flight testing. The precision and accuracy of the autopilot are unknown. Potential future work should investigate flight test methods for small-scale UAV flight.
Proceedings of the 14th Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1989-01-01
Several software-related topics are presented. Topics covered include studies and experiments at the Software Engineering Laboratory at the Goddard Space Flight Center, predicting project success from the Software Project Management Process, software environments, testing in a reuse environment, domain-directed reuse, and classification tree analysis using the Amadeus measurement and empirical analysis.
Space Software Defined Radio Characterization to Enable Reuse
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Bishop, Daniel W.; Chelmins, David
2012-01-01
NASA's Space Communication and Navigation Testbed is beginning operations on the International Space Station this year. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System architecture standard. The Space Station payload has three software defined radios onboard that allow for a wide variety of communications applications; however, each radio was launched with only one waveform application. By design, the testbed allows new waveform applications to be uploaded and tested by experimenters in and outside of NASA. During the system integration phase of the testbed, special waveform test modes and stand-alone test waveforms were used to characterize the SDR platforms for future experiments. Characterization of the Testbed's JPL SDR using test waveforms and specialized ground test modes is discussed in this paper. One of the test waveforms, a record and playback application, can be utilized in a variety of ways, including new-satellite on-orbit checkout as well as independent on-board testbed experiments.
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Second, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what constitutes sufficient testing and when is the most cost-effective time to stop testing.
User systems guidelines for software projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrahamson, L.
1986-04-01
This manual presents guidelines for software standards which were developed so that software project-development teams and management involved in approving the software could have a generalized view of all phases in the software production procedure and the steps involved in completing each phase. Guidelines are presented for six phases of software development: project definition, building a user interface, designing software, writing code, testing code, and preparing software documentation. The discussions for each phase include examples illustrating the recommended guidelines. 45 refs. (DWL)
2004-04-15
Ribbons is a program developed at UAB and used worldwide to graphically depict complicated protein structures in a simplified format. The program uses sophisticated computer systems to understand the implications of protein structures. The Influenza virus remains a major causative agent of a large number of deaths among the elderly and young children, and of huge economic losses due to illness. Finding a cure would have a broad impact on both basic research into the viral pathology of fast-evolving infectious agents and the clinical treatment of influenza virus infection. The reproduction process of all strains of influenza is dependent on the same enzyme, neuraminidase. Shown here is a segmented representation of the neuraminidase inhibitor compound sitting inside a cave-like contour of the neuraminidase enzyme surface. This cave-like formation, present in every neuraminidase enzyme, is the active site crucial to the flu's ability to infect. The space-grown crystals of neuraminidase have provided significant new details about the three-dimensional characteristics of this active site, thus allowing researchers to design drugs that fit more tightly into the site. Principal Investigator: Dr. Larry DeLucas
Housing, the Neighborhood Environment, and Physical Activity among Older African Americans
Hannon, Lonnie; Sawyer, Patricia; Allman, Richard M.
2013-01-01
This study examines the association of the neighborhood environment, as measured by housing factors, with physical activity among older African Americans. Context is provided on the effects of structural inequality as an inhibitor of health-enhancing neighborhood environments. The study population included African Americans participating in the UAB Study of Aging (n=433). Participants demonstrated the ability to walk during a baseline in-home assessment. The strength and independence of housing factors were assessed using neighborhood walking for exercise as the outcome variable. Sociodemographic data, co-morbid medical conditions, and rural/urban residence were included as independent control factors. Homeownership, occupancy, and length of residency maintained positive associations with neighborhood walking independent of control factors. Housing factors appear to be predictive of resident engagement in neighborhood walking. Housing factors, specifically high rates of homeownership, reflect functional and positive neighborhood environments conducive to physical activity. Future interventions seeking to promote health-enhancing behavior should focus on developing housing and built-environment assets within the neighborhood environment. PMID:23745172
Large Scale Software Building with CMake in ATLAS
NASA Astrophysics Data System (ADS)
Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration
2017-10-01
The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.
Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-04-13
Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. ©Mujeeb A Basit, Krystal L Baldwin, Vaishnavi Kannan, Emily L Flahaven, Cassandra J Parks, Jason M Ott, Duwayne L Willett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
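The decision-table idea at the core of this approach can be suggested with a small stand-in: one table specifying the rule-logic aspect of an advisory, with one scenario per row. The Python sketch below is a hypothetical simplification, not the EHR configuration or the FitNesse fixtures from the study; the rule function and field names are invented.

# Sketch of the decision-table style used for the CDS acceptance tests:
# each row specifies inputs and the advisory behavior expected of the rule.
# The rule logic and field names are hypothetical stand-ins.

def advisory_fires(department: str, route: str, screen_done: bool) -> bool:
    """Hypothetical CDS rule: alert for oral medication in the ED when no
    swallowing assessment has been documented."""
    return department == "ED" and route == "oral" and not screen_done

# One aspect of expected behavior per table, one scenario per row,
# in the spirit of the spreadsheet tables vetted with clinicians.
RULE_LOGIC_TABLE = [
    # department, route,  screen_done, expected
    ("ED",        "oral", False,       True),
    ("ED",        "oral", True,        False),
    ("ED",        "IV",   False,       False),
    ("ICU",       "oral", False,       False),
]

def run_table(table) -> None:
    for dept, route, done, expected in table:
        actual = advisory_fires(dept, route, done)
        assert actual == expected, f"row {(dept, route, done)}: got {actual}"

run_table(RULE_LOGIC_TABLE)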
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foucar, James G.; Salinger, Andrew G.; Deakin, Michael
CIME is the software infrastructure for configuring, building, running, and testing an Earth system model. It can be developed and tested as stand-alone software, but its main role is to be integrated into the CESM and ACME Earth system models.
Towards Test Driven Development for Computational Science with pFUnit
NASA Technical Reports Server (NTRS)
Rilee, Michael L.; Clune, Thomas L.
2014-01-01
Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation with finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
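The fine-grained style the authors advocate can be suggested with a minimal example: a small numerical kernel tested against a known-answer oracle and against its theoretical error bound, instead of being exercised only inside a coarse end-to-end run. The kernel, oracle, and bound below are textbook material chosen for illustration (in Python rather than the Fortran that pFUnit targets).

# Sketch of fine-grained, tolerance-aware testing of a numerical kernel.
import math
import unittest

def trapezoid(f, a: float, b: float, n: int) -> float:
    """Composite trapezoidal rule on [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

class TrapezoidTest(unittest.TestCase):
    def test_linear_is_exact(self):
        # Known-answer oracle: the rule is exact for linear integrands.
        self.assertAlmostEqual(trapezoid(lambda x: 2 * x + 1, 0.0, 1.0, 4), 2.0)

    def test_sine_within_theoretical_bound(self):
        # |error| <= (b - a) h^2 max|f''| / 12; for sin on [0, pi], max|f''| = 1.
        n = 100
        h = math.pi / n
        bound = math.pi * h * h / 12
        err = abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0)
        self.assertLess(err, bound * 1.01)  # small slack for rounding

if __name__ == "__main__":
    unittest.main()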
Ground Systems Development Environment (GSDE) interface requirements analysis
NASA Technical Reports Server (NTRS)
Church, Victor E.; Philips, John; Hartenstein, Ray; Bassman, Mitchell; Ruskin, Leslie; Perez-Davila, Alfredo
1991-01-01
A set of procedural and functional requirements is presented for the interface between software development environments and the software integration and test systems used for space station ground systems software. The requirements focus on the need for centralized configuration management of software as it is transitioned from development to formal, target-based testing. This concludes the GSDE Interface Requirements study. A summary is presented of findings concerning the interface itself, possible interface and prototyping directions for further study, and results of the investigation of the Cronus distributed applications environment.
General-Purpose Electronic System Tests Aircraft
NASA Technical Reports Server (NTRS)
Glover, Richard D.
1989-01-01
Versatile digital equipment supports research, development, and maintenance. Extended aircraft interrogation and display system is general-purpose assembly of digital electronic equipment on ground for testing of digital electronic systems on advanced aircraft. Many advanced features, including multiple 16-bit microprocessors, pipeline data-flow architecture, advanced operating system, and resident software-development tools. Basic collection of software includes program for handling many types of data and for displays in various formats. User easily extends basic software library. Hardware and software interfaces to subsystems provided by user designed for flexibility in configuration to meet user's requirements.
Performance testing of 3D point cloud software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-10-01
LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
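One ingredient of such a methodology, timing a load and recording peak memory, can be sketched with the Python standard library alone. The loader and file name below are placeholders, not part of the published study; the full methodology also covers CPU usage, working set, and commit size, which require platform-specific counters.

import resource  # Unix-only; a cross-platform tool would use other counters
import time

def load_point_cloud(path: str) -> list:
    """Placeholder for the loader under test (hypothetical format: x y z)."""
    with open(path) as f:
        return [tuple(map(float, line.split())) for line in f]

def benchmark(path: str) -> None:
    """Time the load and report peak resident memory afterwards."""
    t0 = time.perf_counter()
    cloud = load_point_cloud(path)
    elapsed = time.perf_counter() - t0
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KiB on Linux
    print(f"{len(cloud)} points loaded in {elapsed:.2f} s, peak RSS ~{peak} KiB")

# benchmark("scan_0001.xyz")  # hypothetical input file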
Man-rated flight software for the F-8 DFBW program
NASA Technical Reports Server (NTRS)
Bairnsfather, R. R.
1975-01-01
The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools (the all-digital simulator, the hybrid simulator, and the Iron Bird simulator) are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program, in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS in-flight software or hardware failures.
Four applications of a software data collection and analysis methodology
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Selby, Richard W., Jr.
1985-01-01
The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.
A Heuristic for Improving Legacy Software Quality during Maintenance: An Empirical Case Study
ERIC Educational Resources Information Center
Sale, Michael John
2017-01-01
Many organizations depend on the functionality of mission-critical legacy software and the continued maintenance of this software is vital. Legacy software is defined here as software that contains no testing suite, is often foreign to the developer performing the maintenance, lacks meaningful documentation, and over time, has become difficult to…
ERIC Educational Resources Information Center
Kramer, Aleksey
2013-01-01
The topic of software security has become paramount in information technology (IT) related scholarly research. Researchers have addressed numerous software security topics touching on all phases of the Software Development Life Cycle (SDLC): requirements gathering phase, design phase, development phase, testing phase, and maintenance phase.…
A software framework for developing measurement applications under variable requirements.
Arpaia, Pasquale; Buzio, Marco; Fiscarelli, Lucio; Inglese, Vitaliano
2012-11-01
A framework is proposed for easily developing software for measurement and test applications whose requirements vary widely and change rapidly. The framework allows software quality, in terms of flexibility, usability, and maintainability, to be maximized. Furthermore, the development effort is reduced and better focused by relieving the test engineer of development details. The framework can be configured to satisfy a large set of measurement applications in a generic field for an industrial test division, a test laboratory, or a research center. As an experimental case study, the design, implementation, and assessment of the framework in an application to a magnet-testing measurement scenario at the European Organization for Nuclear Research are reported.
[Development of ophthalmologic software for handheld devices].
Grottone, Gustavo Teixeira; Pisa, Ivan Torres; Grottone, João Carlos; Debs, Fernando; Schor, Paulo
2006-01-01
The formulas for calculation of intraocular lenses (IOLs) have evolved since the first theoretical formulas by Fyodorov. Among the second-generation formulas, the SRK-I formula has a simple calculation involving only anteroposterior (axial) length, an IOL constant and average keratometry. As those formulas evolved, their complexity increased, making manual reconfiguration of parameters in special situations impracticable. The production and development of software for this purpose can therefore help surgeons recalculate those values when needed. The aims were to conceive, develop and test a Brazilian software program for calculation of IOL dioptric power on handheld computers. For the development and programming of the IOL calculation software, we used the PocketC program (OrbWorks Concentrated Software, USA). We compared the results collected from a gold-standard device (Ultrascan/Alcon Labs) with a simulation of 100 fictitious patients, using the same IOL parameters. The results were grouped as ULTRASCAN data and SOFTWARE data. Using the SRK/T formula, the range of parameters included keratometry varying between 35 and 55 D, axial length between 20 and 28 mm, and IOL constants of 118.7, 118.3 and 115.8. The Wilcoxon test showed that the groups do not differ (p=0.314). Values in the Ultrascan sample varied between 11.82 and 27.97; in the tested program the variation was practically identical (11.83-27.98). The average of the Ultrascan group was 20.93, and the software group had a similar average. The standard deviation of the samples was also similar (4.53). The precision of the IOL software for handheld devices was similar to that of the standard device using the SRK/T formula. The software worked properly and was stable, without bugs, in the tested models of the operating system.
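For context, the simple regression form attributed above to the early SRK formulas can be written in a few lines. This Python sketch shows the classic SRK regression (P = A - 2.5L - 0.9K) for orientation only; the study's comparisons used the SRK/T formula, whose vergence-based derivation is considerably longer.

def srk_iol_power(a_constant: float, axial_length_mm: float,
                  avg_keratometry_d: float) -> float:
    """Classic SRK regression: P = A - 2.5*L - 0.9*K (dioptres)."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * avg_keratometry_d

# Example with illustrative average-eye values: A = 118.3, L = 23.5 mm, K = 43.5 D
print(round(srk_iol_power(118.3, 23.5, 43.5), 2))  # ~20.4 D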
Holló, Gábor; Shu-Wei, Hsu; Naghizadeh, Farzaneh
2016-06-01
To compare the current (6.3) and a novel software version (6.12) of the RTVue-100 optical coherence tomograph (RTVue-OCT) for ganglion cell complex (GCC) and retinal nerve fiber layer thickness (RNFLT) image segmentation and detection of glaucoma in high myopia. RNFLT and GCC scans were acquired with software version 6.3 of the RTVue-OCT on 51 highly myopic eyes (spherical refractive error ≤-6.0 D) of 51 patients, and were analyzed with both software versions. Twenty-two eyes were nonglaucomatous, 13 were ocular hypertensive and 16 eyes had glaucoma. No difference was seen for any RNFLT or average GCC parameter between the software versions (paired t test, P≥0.084). Global loss volume was significantly lower (more normal) with version 6.12 than with version 6.3 (Wilcoxon signed-rank test, P<0.001). The percentage agreement (κ) between the clinical (normal and ocular hypertensive vs. glaucoma) and the software-provided classifications (normal and borderline vs. outside normal limits) was 0.3219 and 0.4442 for average RNFLT, and 0.2926 and 0.4977 for average GCC, with versions 6.3 and 6.12, respectively (McNemar symmetry test, P≥0.289). No difference in average RNFLT and GCC classification (McNemar symmetry test, P≥0.727) or in the number of eyes with at least 1 segmentation error (P≥0.109) was found between the software versions. Although GCC segmentation was improved with software version 6.12 compared with the current version in highly myopic eyes, this did not result in a significant change of the average RNFLT and GCC values, and did not significantly improve the software-provided classification for glaucoma.
State-of-the-art software for window energy-efficiency rating and labeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arasteh, D.; Finlayson, E.; Huang, J.
1998-07-01
Measuring the thermal performance of windows in typical residential buildings is an expensive proposition. Not only is laboratory testing expensive, but each window manufacturer typically offers hundreds of individual products, each of which has different thermal performance properties. With over a thousand window manufacturers nationally, a testing-based rating system would be prohibitively expensive to the industry and to consumers. Beginning in the early 1990s, simulation software began to be used as part of a national program for rating window U-values. The rating program has since been expanded to include Solar Heat Gain Coefficients and is now being extended to annual energy performance. This paper describes four software packages available to the public from Lawrence Berkeley National Laboratory (LBNL). These software packages are used to evaluate window thermal performance: RESFEN (for evaluating annual energy costs), WINDOW (for calculating a product's thermal performance properties), THERM (a preprocessor for WINDOW that determines two-dimensional heat-transfer effects), and Optics (a preprocessor for WINDOW's glass database). Software not only offers a less expensive means than testing to evaluate window performance, it can also be used during the design process to help manufacturers produce windows that will meet target specifications. In addition, software can show small improvements in window performance that might not be detected in actual testing because of large uncertainties in test procedures.
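At its simplest, a whole-window rating of the kind WINDOW reports is an area-weighted combination of the center-of-glass, edge-of-glass, and frame regions; the Python sketch below shows that weighting with invented numbers. This is an illustration of the general approach only; the actual software layers detailed two-dimensional heat-transfer modeling (THERM) beneath this step.

def window_u_factor(parts) -> float:
    """parts: iterable of (area_m2, u_value_W_per_m2K) for the
    center-of-glass, edge-of-glass, and frame regions."""
    total_area = sum(a for a, _ in parts)
    return sum(a * u for a, u in parts) / total_area

parts = [
    (1.20, 1.65),  # center of glass (illustrative values)
    (0.25, 2.10),  # edge of glass
    (0.35, 2.80),  # frame
]
print(f"whole-window U ~ {window_u_factor(parts):.2f} W/m2K")  # ~1.94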
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.
Application of industry-standard guidelines for the validation of avionics software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shagnea, Anita M.
1990-01-01
The application of industry standards to the development of avionics software is discussed, focusing on verification and validation activities. It is pointed out that the procedures that guide the avionics software development and testing process are under increased scrutiny. The DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, are used by the FAA for certifying avionics software. To investigate the effectiveness of the DO-178A guidelines for improving the quality of avionics software, guidance and control software (GCS) is being developed according to the DO-178A development method. It is noted that, due to the extent of the data collection and configuration management procedures, any phase in the life cycle of a GCS implementation can be reconstructed. Hence, a fundamental development and testing platform has been established that is suitable for investigating the adequacy of various software development processes. In particular, the overall effectiveness and efficiency of the development method recommended by the DO-178A guidelines are being closely examined.
Development strategies for the satellite flight software on-board Meteosat Third Generation
NASA Astrophysics Data System (ADS)
Tipaldi, Massimo; Legendre, Cedric; Koopmann, Olliver; Ferraguto, Massimo; Wenker, Ralf; D'Angelo, Gianni
2018-04-01
Nowadays, satellites are becoming increasingly software dependent. Satellite Flight Software (FSW), that is to say, the application software running on the satellite main On-Board Computer (OBC), plays a relevant role in implementing complex space mission requirements. In this paper, we examine relevant technical approaches and programmatic strategies adopted for the development of the Meteosat Third Generation Satellite (MTG) FSW. To begin with, we present its layered model-based architecture, and the means for ensuring a robust and reliable interaction among the FSW components. Then, we focus on the selection of an effective software development life cycle model. In particular, by combining plan-driven and agile approaches, we can fulfill the need of having preliminary SW versions. They can be used for the elicitation of complex system-level requirements as well as for the initial satellite integration and testing activities. Another important aspect can be identified in the testing activities. Indeed, very demanding quality requirements have to be fulfilled in satellite SW applications. This manuscript proposes a test automation framework, which uses an XML-based test procedure language independent of the underlying test environment. Finally, a short overview of the MTG FSW sizing and timing budgets concludes the paper.
Using Knowledge Management to Revise Software-Testing Processes
ERIC Educational Resources Information Center
Nogeste, Kersti; Walker, Derek H. T.
2006-01-01
Purpose: This paper aims to use a knowledge management (KM) approach to effectively revise a utility retailer's software testing process. This paper presents a case study of how the utility organisation's customer services IT production support group improved their test planning skills through applying the American Productivity and Quality Center…
TEST (Toxicity Estimation Software Tool) Ver 4.1
The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to allow users to easily estimate toxicity and physical properties using a variety of QSAR methodologies. T.E.S.T allows a user to estimate toxicity without requiring any external programs. Users can input a chem...
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers. It was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress-test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards) and the introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality code can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.
The Toxicity Estimation Software Tool (T.E.S.T.)
The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to estimate toxicological values for aquatic and mammalian species considering acute and chronic endpoints for screening purposes within TSCA and REACH programs.
Survey of Verification and Validation Techniques for Small Satellite Software Development
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2015-01-01
The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.
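As a toy illustration of the last pairing, run-time monitoring with fault-tolerant design, the Python sketch below checks declared invariants on each control cycle and degrades to a safe fallback when one fails. The control law, invariants, and thresholds are invented for the example, not drawn from any surveyed mission.

def primary_controller(state: dict) -> float:
    return 0.5 * state["error"]          # hypothetical control law

def safe_fallback(state: dict) -> float:
    return 0.0                           # command a benign output

INVARIANTS = [
    ("output finite", lambda s, u: abs(u) < 1e6),
    ("output bounded", lambda s, u: abs(u) <= 10.0),
]

def monitored_step(state: dict) -> float:
    """Run the primary law, then let the monitor veto it if an invariant fails."""
    u = primary_controller(state)
    for name, check in INVARIANTS:
        if not check(state, u):
            print(f"monitor: invariant '{name}' violated; degrading")
            return safe_fallback(state)
    return u

print(monitored_step({"error": 4.0}))    # 2.0 from the primary law
print(monitored_step({"error": 1e9}))    # the fallback engages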
Study on Spacelab software development and integration concepts
NASA Technical Reports Server (NTRS)
1974-01-01
A study was conducted to define the complexity and magnitude of the Spacelab software challenge. The study was based on current Spacelab program concepts, anticipated flight schedules, and ground operation plans. The study was primarily directed toward identifying and solving problems related to the experiment flight application and tests and checkout software executing in the Spacelab onboard command and data management subsystem (CDMS) computers and electrical ground support equipment (EGSE). The study provides a conceptual base from which it is possible to proceed into the development phase of the Software Test and Integration Laboratory (STIL) and establishes guidelines for the definition of standards which will ensure that the total Spacelab software is understood prior to entering development.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Johnson, Sandra K.; Lux, James P.
2010-01-01
NASA is developing an experimental flight payload (referred to as the Space Communication and Navigation (SCAN) Test Bed) to investigate software defined radio (SDR), networking, and navigation technologies, operationally in the space environment. The payload consists of three software defined radios, each compliant to NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios are new technology developments underway by NASA and industry partners. Planned for launch in early 2012, the payload will be externally mounted to the International Space Station truss and conduct experiments representative of future mission capability.
2017-03-17
NASA engineers and test directors gather in Firing Room 3 in the Launch Control Center at NASA's Kennedy Space Center in Florida, to watch a demonstration of the automated command and control software for the agency's Space Launch System (SLS) and Orion spacecraft. The software is called the Ground Launch Sequencer. It will be responsible for nearly all of the launch commit criteria during the final phases of launch countdowns. The Ground and Flight Application Software Team (GFAST) demonstrated the software. It was developed by the Command, Control and Communications team in the Ground Systems Development and Operations (GSDO) Program. GSDO is helping to prepare the center for the first test flight of Orion atop the SLS on Exploration Mission 1.
Automation software for a materials testing laboratory
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Bonacuse, Peter J.
1990-01-01
The software environment in use at the NASA-Lewis Research Center's High Temperature Fatigue and Structures Laboratory is reviewed. This software environment is aimed at supporting the tasks involved in performing materials behavior research. The features and capabilities of the approach to specifying a materials test include static and dynamic control mode switching, enabling multimode test control; dynamic alteration of the control waveform based upon events occurring in the response variables; precise control over the nature of both command waveform generation and data acquisition; and the nesting of waveform/data acquisition strategies so that material history dependencies may be explored. To eliminate repetitive tasks in the conventional research process, a communications network software system is established which provides file interchange and remote console capabilities.
Creating and Testing Simulation Software
NASA Technical Reports Server (NTRS)
Heinich, Christina M.
2013-01-01
The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provide a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
SigmaPlot 2000, Version 6.00, SPSS Inc. Computer Software Test Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
HURLBUT, S.T.
2000-10-24
SigmaPlot is a vendor software product used in conjunction with the supercritical fluid extraction Fourier transform infrared spectrometer (SFE-FTIR) system. This product converts the raw spectral data to useful area numbers. SigmaPlot will be used in conjunction with procedure ZA-565-301, "Determination of Moisture by Supercritical Fluid Extraction and Infrared Detection." This test plan will be performed in conjunction with or prior to HNF-6936, "HA-53 Supercritical Fluid Extraction System Acceptance Test Plan," to perform analyses for water. The test will ensure that the software can be installed properly and will manipulate the analytical data correctly.
Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A) software assurance plan
NASA Technical Reports Server (NTRS)
Schwantje, Robert; Smith, Claude
1994-01-01
This document defines the responsibilities of Software Quality Assurance (SQA) for the development of the flight software installed in EOS/AMSU-A instruments, and of the ground support software used in the test and integration of the EOS/AMSU-A instruments.
A Structure for Creating Quality Software.
ERIC Educational Resources Information Center
Christensen, Larry C.; Bodey, Michael R.
1990-01-01
Addresses the issue of assuring quality software for use in computer-aided instruction and presents a structure by which developers can create quality courseware. Differences between courseware and computer-aided instruction software are discussed, methods for testing software are described, and human factors issues as well as instructional design…
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts is presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper provides a dynamic-model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method implements dynamic test requirements in dynamic models, so that dynamic test requirement tracing is easily generated. It automatically produces standardized test requirements and test documentation, addressing inconsistency and incompleteness in document-related content and improving efficiency.
Software Testing and Verification in Climate Model Development
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Rood, RIchard B.
2011-01-01
Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
A Model for Assessing the Liability of Seemingly Correct Software
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.
1991-01-01
Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
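For reference, a standard statement of the probable-correctness bound underlying such models, given here as background (the paper's combined risk model is more detailed): if the software's failure probability per input drawn from the operational distribution is at least $\theta$, then $T$ consecutive failure-free random tests occur with probability at most $(1-\theta)^T$,

\[
\Pr\bigl(T \text{ failure-free tests} \;\big|\; \text{failure probability} \ge \theta\bigr) \le (1-\theta)^{T},
\]

so after observing $T$ failure-free tests one may assert with confidence $C = 1 - (1-\theta)^{T}$ that the failure probability is below $\theta$.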
A tool to include gamma analysis software into a quality assurance program.
Agnew, Christina E; McGarry, Conor K
2016-03-01
To provide a tool to enable gamma analysis software algorithms to be included in a quality assurance (QA) program. Four image sets were created, comprising two geometric images to independently test the distance-to-agreement (DTA) and dose-difference (DD) elements of the gamma algorithm, a clinical step-and-shoot IMRT field, and a clinical VMAT arc. The images were analysed using global and local gamma analysis with 2 in-house and 8 commercially available software packages encompassing 15 software versions. The effect of image resolution on gamma pass rates was also investigated. All but one of the software packages accurately calculated the gamma passing rate for the geometric images. Variation in global gamma passing rates of 1% at 3%/3mm and over 2% at 1%/1mm was measured between software packages and software versions with analysis of appropriately sampled images. This study provides a suite of test images and the gamma pass rates achieved for a selection of commercially available software. This image suite will enable validation of gamma analysis software within a QA program and provide a frame of reference by which to compare results reported in the literature from various manufacturers and software versions. Copyright © 2015. Published by Elsevier Ireland Ltd.
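The gamma computation such packages implement (in the Low et al. formulation) reduces, in one dimension, to a short search; the Python sketch below is illustrative only, with invented profiles and a uniform 1-D grid, whereas clinical tools evaluate 2-D or 3-D dose distributions with finer interpolation.

import math

def gamma_1d(ref, evl, spacing_mm, dta_mm=3.0, dd_pct=3.0, local=False):
    """Return the gamma value at each reference point (global by default)."""
    norm = max(ref)  # global normalisation dose
    gammas = []
    for i, d_ref in enumerate(ref):
        best = math.inf
        for j, d_evl in enumerate(evl):
            dist = (i - j) * spacing_mm
            dd_norm = (d_ref if local else norm) * dd_pct / 100.0
            g2 = (dist / dta_mm) ** 2 + ((d_evl - d_ref) / dd_norm) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# Invented reference and evaluated dose profiles on a 1 mm grid.
ref = [10, 50, 100, 50, 10]
evl = [10, 52, 101, 49, 10]
g = gamma_1d(ref, evl, spacing_mm=1.0)
pass_rate = 100.0 * sum(gv <= 1.0 for gv in g) / len(g)
print(f"gamma pass rate: {pass_rate:.1f}%")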
Autonomous Real Time Requirements Tracing
NASA Technical Reports Server (NTRS)
Plattsmier, George I.; Stetson, Howard K.
2014-01-01
One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX(sup TM) Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739) and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX(sup TM) Language for development of autonomous command and control software. The Timeliner-TLX(sup TM) system has the unique feature of reporting the current line of the statement in execution during real-time execution of the software. This internal reporting of the execution line number unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX(sup TM) sequence, as the line number reporting is embedded inside the Timeliner-TLX(sup TM) execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference. This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are being met during real-time execution of the targeted system. It is envisioned that real-time requirements tracing will greatly assist the movement of auto-procedures to flight software, enhancing the software assurance of auto-procedures and also their acceptance as reliable commanders.
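The line-number-reporting idea transfers to other languages; as a loose analogy (not Timeliner-TLX itself), Python's trace hook can log SRS requirement IDs as the source lines mapped to them execute. Everything below, the mapping, the function, and the requirement IDs, is hypothetical.

import sys

REQUIREMENT_MAP = {}   # {(filename, lineno): "SRS-xxx"}, filled in below
satisfied = set()

def tracer(frame, event, arg):
    """Log a requirement the first time its mapped source line executes."""
    if event == "line":
        req = REQUIREMENT_MAP.get((frame.f_code.co_filename, frame.f_lineno))
        if req and req not in satisfied:
            satisfied.add(req)
            print(f"requirement {req} exercised at line {frame.f_lineno}")
    return tracer

def transfer_fluid():
    open_valve = True      # suppose this line implements SRS-101
    start_pump = True      # and this one SRS-102
    return open_valve and start_pump

# Register the two hypothetical requirement lines, then run under the tracer.
code = transfer_fluid.__code__
REQUIREMENT_MAP[(code.co_filename, code.co_firstlineno + 1)] = "SRS-101"
REQUIREMENT_MAP[(code.co_filename, code.co_firstlineno + 2)] = "SRS-102"

sys.settrace(tracer)
transfer_fluid()
sys.settrace(None)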
Western aeronautical test range real-time graphics software package MAGIC
NASA Technical Reports Server (NTRS)
Malone, Jacqueline C.; Moore, Archie L.
1988-01-01
The master graphics interactive console (MAGIC) software package used on the Western Aeronautical Test Range (WATR) of the NASA Ames Research Center is described. MAGIC is a resident real-time research tool available to flight researchers and scientists in the NASA mission control centers of the WATR at the Dryden Flight Research Facility at Edwards, California. The hardware configuration and capabilities of the real-time software package are also discussed.
Software component quality evaluation
NASA Technical Reports Server (NTRS)
Clough, A. J.
1991-01-01
The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.
Standard practices for the implementation of computer software
NASA Technical Reports Server (NTRS)
Irvine, A. P. (Editor)
1978-01-01
A standard approach to the development of computer programs is provided that covers the life cycle of software development from the planning and requirements phase through the software acceptance testing phase. All documents necessary to provide the required visibility into the software life cycle process are discussed in detail.
Proceedings, Conference on the Computing Environment for Mathematical Software
NASA Technical Reports Server (NTRS)
1981-01-01
Recent advances in software and hardware technology which make it economical to create computing environments appropriate for specialized applications are addressed. Topics included software tools, FORTRAN standards activity, and features of languages, operating systems, and hardware that are important for the development, testing, and maintenance of mathematical software.
A methodology for producing reliable software, volume 1
NASA Technical Reports Server (NTRS)
Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.
1976-01-01
An investigation into the areas having an impact on producing reliable software including automated verification tools, software modeling, testing techniques, structured programming, and management techniques is presented. This final report contains the results of this investigation, analysis of each technique, and the definition of a methodology for producing reliable software.
AEDT Software Requirements Documents - Draft
DOT National Transportation Integrated Search
2007-01-25
This software requirements document serves as the basis for designing and testing the Aviation Environmental Design Tool (AEDT) software. The intended audience for this document consists of the following groups: the AEDT designers, developers, and te...
Software design for automated assembly of truss structures
NASA Technical Reports Server (NTRS)
Herstrom, Catherine L.; Grantham, Carolyn; Allen, Cheryl L.; Doggett, William R.; Will, Ralph W.
1992-01-01
Concern over the limited intravehicular activity time has increased the interest in performing in-space assembly and construction operations with automated robotic systems. A technique being considered at LaRC is a supervised-autonomy approach, which can be monitored by an Earth-based supervisor that intervenes only when the automated system encounters a problem. A test-bed to support evaluation of the hardware and software requirements for supervised-autonomy assembly methods was developed. This report describes the design of the software system necessary to support the assembly process. The software is hierarchical and supports both automated assembly operations and supervisor error-recovery procedures, including the capability to pause and reverse any operation. The software design serves as a model for the development of software for more sophisticated automated systems and as a test-bed for evaluation of new concepts and hardware components.
Implementation and Testing of VLBI Software Correlation at the USNO
NASA Technical Reports Server (NTRS)
Fey, Alan; Ojha, Roopesh; Boboltz, Dave; Geiger, Nicole; Kingham, Kerry; Hall, David; Gaume, Ralph; Johnston, Ken
2010-01-01
The Washington Correlator (WACO) at the U.S. Naval Observatory (USNO) is a dedicated VLBI processor based on dedicated hardware of ASIC design. The WACO is currently over 10 years old and is nearing the end of its expected lifetime. Plans for implementation and testing of software correlation at the USNO are currently being considered. The VLBI correlation process is, by its very nature, well suited to a parallelized computing environment. Commercial off-the-shelf computer hardware has advanced in processing power to the point where software correlation is now both economically and technologically feasible. The advantages of software correlation are manifold but include flexibility, scalability, and easy adaptability to changing environments and requirements. We discuss our experience with and plans for use of software correlation at USNO with emphasis on the use of the DiFX software correlator.
Shade matching assisted by digital photography and computer software.
Schropp, Lars
2009-04-01
To evaluate the efficacy of digital photographs and graphic computer software for color matching compared to conventional visual matching. The shade of a tab from a shade guide (Vita 3D-Master Guide) placed in a phantom head was matched to a second guide of the same type by nine observers. This was done for twelve selected shade tabs (tests). The shade-matching procedure was performed visually in a simulated clinic environment and with digital photographs, and the time spent for both procedures was recorded. An alternative arrangement of the shade tabs was used in the digital photographs. In addition, a graphic software program was used for color analysis. Hue, chroma, and lightness values of the test tab and all tabs of the second guide were derived from the digital photographs. According to the CIE L*C*h* color system, the color differences between the test tab and tabs of the second guide were calculated. The shade guide tab that deviated least from the test tab was determined to be the match. Shade matching performance by means of graphic software was compared with the two visual methods and tested by Chi-square tests (alpha = 0.05). Eight of twelve test tabs (67%) were matched correctly by the computer software method. This was significantly better (p < 0.02) than the performance of the visual shade matching methods conducted in the simulated clinic (32% correct match) and with photographs (28% correct match). No correlation between time consumption for the visual shade matching methods and frequency of correct match was observed. Shade matching assisted by digital photographs and computer software was significantly more reliable than by conventional visual methods.
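For context, the software-assisted match described here reduces to picking the guide tab with the minimum CIE color difference from the test tab. A minimal sketch, assuming L*C*h* values have already been extracted from the photographs; the tab names and coordinates below are invented for illustration.

    import math

    def lch_to_lab(L, C, h_deg):
        """Convert CIE L*C*h* (hue in degrees) to L*a*b*."""
        h = math.radians(h_deg)
        return L, C * math.cos(h), C * math.sin(h)

    def delta_e(tab1, tab2):
        """CIE76 color difference between two (L, C, h) tuples."""
        L1, a1, b1 = lch_to_lab(*tab1)
        L2, a2, b2 = lch_to_lab(*tab2)
        return math.sqrt((L1 - L2)**2 + (a1 - a2)**2 + (b1 - b2)**2)

    test_tab = (72.0, 18.5, 85.0)   # hypothetical measured tab
    guide = {"2M2": (73.1, 17.9, 86.2), "3M2": (69.4, 20.3, 84.1)}
    best = min(guide, key=lambda name: delta_e(test_tab, guide[name]))
    print("closest match:", best)   # the tab deviating least is the match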
Software and System Warranty Issues and Generic Warranty Clause.
1987-06-01
communications networks and other government-furnished equipment. Special attention must also be paid to software packages, such as operating... * Phase A - Development Test and Evaluation conducted at a test facility. * Phase - Development Test and...
Development of a Unix/VME data acquisition system
NASA Astrophysics Data System (ADS)
Miller, M. C.; Ahern, S.; Clark, S. M.
1992-01-01
The current status of a Unix-based VME data acquisition development project is described. It is planned to use existing Fortran data collection software to drive the existing CAMAC electronics via a VME CAMAC branch driver card and associated Daresbury Unix driving software. The first usable Unix driver has been written and produces single-action CAMAC cycles from test software. The data acquisition code has been implemented in test mode under Unix with few problems and effort is now being directed toward finalizing calls to the CAMAC-driving software and ultimate evaluation of the complete system.
NASA Technical Reports Server (NTRS)
Roche, Rigoberto; Shalkhauser, Mary Jo Windmille
2017-01-01
The Integrated Power, Avionics and Software (IPAS) software defined radio (SDR) was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RAICS) platform, for radio development at NASA Johnson Space Center. Software and hardware description language (HDL) code were delivered by NASA Glenn Research Center for use in the IPAS test bed and for development of their own Space Telecommunications Radio System (STRS) waveforms on the RAICS platform. The purpose of this document is to describe how to setup and operate the IPAS STRS Radio platform with its delivered test waveform.
Man-rated flight software for the F-8 DFBW program
NASA Technical Reports Server (NTRS)
Bairnsfather, R. R.
1976-01-01
The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program assembly control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools are described, as well as the program test plans and their implementation on the various simulators. Failure effects analysis and the creation of special failure generating software for testing purposes are described.
Molinuevo, Beatriz; Torrubia, Rafael
2011-04-01
The relevance of healthcare student training in communication skills has led to the development of instruments for measuring attitudes towards learning communication skills. One such instrument is the Communication Skills Attitude Scale (CSAS), developed in English speaking students and adapted to different languages and cultures. No data is available on the performance of CSAS with South European students. The aims of the present study were to translate the CSAS into the Catalan language and study its psychometric properties in South European healthcare students. A total of 569 students from the School of Medicine of the Universitat Autònoma de Barcelona (UAB) participated. Students completed a Catalan version of the CSAS and provided demographic and education information. Principal component analysis with oblimin rotation supported a two-factor original structure with some modifications. In general, internal consistency and test-retest reliability of the scales were satisfactory, especially for the factor measuring positive attitudes. Relationships of student responses on the two factors with demographic and education variables were consistent with previous work. Students with higher positive attitudes tended to be female, to be foreign students and to think that their communication skills needed improving. Students with higher negative attitudes tended to be male and to have parents that were doctors or nurses. These data support the internal validity of a Catalan version of the CSAS and support its use in future research and educational studies related to attitudes towards learning communication skills for South European students who speak Catalan.
(Quickly) Testing the Tester via Path Coverage
NASA Technical Reports Server (NTRS)
Groce, Alex
2009-01-01
The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.
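One inexpensive approximation of this idea is to record the sequence of branch outcomes the test framework takes on each run and compare the set of observed paths across framework versions; a shrinking path set can flag a configuration or implementation error in the tester itself. A minimal sketch follows; the harness and its branches are invented, not the authors' tool.

    import functools

    observed_paths = set()

    def trace_paths(fn):
        """Record the tuple of branch outcomes taken during each call."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace = []
            result = fn(*args, trace=trace, **kwargs)
            observed_paths.add(tuple(trace))
            return result
        return wrapper

    @trace_paths
    def select_test_action(state, trace):
        if state["error_seen"]:            # branch 1
            trace.append("retry")
            return "retry"
        if state["depth"] > 10:            # branch 2
            trace.append("backtrack")
            return "backtrack"
        trace.append("extend")
        return "extend"

    for s in [{"error_seen": True, "depth": 0},
              {"error_seen": False, "depth": 3},
              {"error_seen": False, "depth": 99}]:
        select_test_action(s)
    print("distinct paths exercised:", len(observed_paths))   # 3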
A progress report on a NASA research program for embedded computer systems software
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Senn, E. H.; Will, R. W.; Straeter, T. A.
1979-01-01
The paper presents the results of the second stage of the Multipurpose User-oriented Software Technology (MUST) program. Four primary areas of activities are discussed: programming environment, HAL/S higher-order programming language support, the Integrated Verification and Testing System (IVTS), and distributed system language research. The software development environment is provided by the interactive software invocation system. The higher-order programming language (HOL) support chosen for consideration is HAL/S mainly because at the time it was one of the few HOLs with flight computer experience and it is the language used on the Shuttle program. The overall purpose of IVTS is to provide a 'user-friendly' software testing system which is highly modular, user controlled, and cooperative in nature.
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands, with varying levels of additive input and feedback noise. The testing, which used the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the conduct of the error burst analyses. The pitch axis problem provides data for use in constructing a model for predicting the reliability of software in systems with feedback. The study was undertaken to find means to perform reliability evaluations of flight control software.
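In n-version programming, independently written implementations of the same specification run on the same inputs and any disagreement flags a suspected fault. A minimal sketch with three toy versions of a single threshold check; the condition and its seeded boundary bug are invented for illustration.

    from collections import Counter

    def version_a(angle): return angle > 5.0
    def version_b(angle): return angle >= 5.0     # seeded boundary bug
    def version_c(angle): return angle > 5.0

    def vote(inputs, versions):
        """Majority-vote the versions; log any dissenting output."""
        suspected = []
        for x in inputs:
            outputs = [v(x) for v in versions]
            majority, _ = Counter(outputs).most_common(1)[0]
            suspected += [(v.__name__, x) for v, out in zip(versions, outputs)
                          if out != majority]
        return suspected

    print(vote([4.9, 5.0, 5.1], [version_a, version_b, version_c]))
    # [('version_b', 5.0)] -- the boundary disagreement surfaces the fault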
Guidelines for software inspections
NASA Technical Reports Server (NTRS)
1983-01-01
Quality control inspections are software problem-finding procedures that provide defect removal as well as improvements in software functionality, maintenance, quality, and development and testing methodology. The many side benefits include education, documentation, training, and scheduling.
Experimental control in software reliability certification
NASA Technical Reports Server (NTRS)
Trammell, Carmen J.; Poore, Jesse H.
1994-01-01
There is growing interest in software 'certification', i.e., confirmation that software has performed satisfactorily under a defined certification protocol. Regulatory agencies, customers, and prospective reusers all want assurance that a defined product standard has been met. In other industries, products are typically certified under protocols in which random samples of the product are drawn, tests characteristic of operational use are applied, analytical or statistical inferences are made, and products meeting a standard are 'certified' as fit for use. A warranty statement is often issued upon satisfactory completion of a certification protocol. This paper outlines specific engineering practices that must be used to preserve the validity of the statistical certification testing protocol. The assumptions associated with a statistical experiment are given, and their implications for statistical testing of software are described.
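A common protocol of this kind is zero-failure statistical testing: draw n test cases at random from the operational usage distribution, and if all succeed, certify reliability R at confidence C, where R^n <= 1 - C, i.e. n = ln(1 - C) / ln(R). A worked sketch (the target figures are illustrative):

    import math

    def tests_required(reliability, confidence):
        """Failure-free random operational tests needed to certify
        `reliability` per demand at the given confidence level."""
        return math.ceil(math.log(1 - confidence) / math.log(reliability))

    # certify R >= 0.999 per demand at 95% confidence
    print(tests_required(0.999, 0.95))   # about 2995 failure-free runs

A single failure invalidates the certification under this protocol, which is exactly the experimental-control point the paper makes: the test cases must be a genuine random sample from the operational profile for the inference to hold.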
ITOS to EDGE "Bridge" Software for Morpheus Lunar/Martian Vehicle
NASA Technical Reports Server (NTRS)
Hirsh, Robert; Fuchs, Jordan
2012-01-01
My project involved improving upon existing software and writing new software for the Project Morpheus team. Specifically, I created and updated Integrated Test and Operations Systems (ITOS) user interfaces for on-board interaction with the vehicle during archive playback as well as live streaming data. These interfaces are an integral part of the testing and operations for the Morpheus vehicle, providing any and all information from the vehicle to evaluate instruments and ensure coherence and control of the vehicle during Morpheus missions. I also created a "bridge" program for interfacing "live" telemetry data with the Engineering DOUG Graphics Engine (EDGE) software for a graphical (standalone or VR dome) view of live Morpheus flights or archive replays, providing graphical representation of vehicle flight and movement during subsequent tests and in real missions.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
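For context, the simplest NHPP growth model (Goel-Okumoto) uses the mean value function m(t) = a(1 - e^{-bt}), where a is the expected total fault content and b the detection rate; models of the kind proposed here replace the exponent with a testing-coverage function and scale the fault content by removal efficiency. A minimal sketch of fitting the basic form to cumulative failure counts; the weekly counts below are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """NHPP mean value function: expected cumulative faults by time t."""
        return a * (1.0 - np.exp(-b * t))

    # invented cumulative fault counts over twelve weeks of testing
    t = np.arange(1, 13, dtype=float)
    faults = np.array([5, 11, 15, 19, 22, 25, 26, 28, 29, 30, 31, 31], float)

    (a, b), _ = curve_fit(goel_okumoto, t, faults, p0=(40.0, 0.1))
    print("estimated fault content a=%.1f, detection rate b=%.3f" % (a, b))
    print("expected residual faults: %.1f" % (a - faults[-1]))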
Software for Automated Image-to-Image Co-registration
NASA Technical Reports Server (NTRS)
Benkelman, Cody A.; Hughes, Heidi
2007-01-01
The project objectives are: a) develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) provide automated testing for quantitative analysis; and d) develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-01-01
Background: Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes: as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements."
Objective: We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse).
Methods: Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite.
Results: We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build.
Conclusions: Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922
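The spreadsheet-style acceptance tests described are essentially decision tables executed against the configured rule. A minimal sketch in plain Python rather than the FitNesse fixture API; the rule logic and field names are invented stand-ins for the EHR configuration.

    # each row: inputs the CDS rule sees, plus the expected advisory outcome
    acceptance_table = [
        {"dept": "ED",  "suspected_stroke": True,  "route": "oral", "expect": True},
        {"dept": "ED",  "suspected_stroke": True,  "route": "IV",   "expect": False},
        {"dept": "ED",  "suspected_stroke": False, "route": "oral", "expect": False},
        {"dept": "ICU", "suspected_stroke": True,  "route": "oral", "expect": False},
    ]

    def advisory_fires(dept, suspected_stroke, route):
        """Stand-in for the configured CDS advisory under test."""
        return dept == "ED" and suspected_stroke and route == "oral"

    failures = [row for row in acceptance_table
                if advisory_fires(row["dept"], row["suspected_stroke"],
                                  row["route"]) != row["expect"]]
    assert not failures, "CDS configuration defect: %r" % failures

Run before the build, every row fails (the rule does not exist yet); run after, the same table doubles as the regression suite, which is the "executable requirements" idea.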
van Beek, J; Haanperä, M; Smit, P W; Mentula, S; Soini, H
2018-04-11
Culture-based assays are currently the reference standard for drug susceptibility testing for Mycobacterium tuberculosis. They provide good sensitivity and specificity but are time consuming. The objective of this study was to evaluate whether whole genome sequencing (WGS), combined with software tools for data analysis, can replace routine culture-based assays for drug susceptibility testing of M. tuberculosis. M. tuberculosis cultures sent to the Finnish mycobacterial reference laboratory in 2014 (n = 211) were phenotypically tested by Mycobacteria Growth Indicator Tube (MGIT) for first-line drug susceptibilities. WGS was performed for all isolates using the Illumina MiSeq system, and data were analysed using five software tools (PhyResSE, Mykrobe Predictor, TB Profiler, TGS-TB and KvarQ). Diagnostic time and reagent costs were estimated for both methods. The sensitivity of the five software tools to predict any resistance among strains was almost identical, ranging from 74% to 80%, and specificity was more than 95% for all software tools except for TGS-TB. The sensitivity and specificity to predict resistance to individual drugs varied considerably among the software tools. Reagent costs for MGIT and WGS were €26 and €143 per isolate respectively. Turnaround time for MGIT was 19 days (range 10-50 days) for first-line drugs, and turnaround time for WGS was estimated to be 5 days (range 3-7 days). WGS could be used as a prescreening assay for drug susceptibility testing with confirmation of resistant strains by MGIT. The functionality and ease of use of the software tools need to be improved. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Software Test Handbook: Software Test Guidebook. Volume 2.
1984-03-01
system test phase for usually one or more of the following three reasons: a. To simulate stress and volume tests (e.g., simulating the actions of 100... peer reviews that differ in formality, participant roles and responsibilities, output produced, and input required. a. Information Input. The input to... form (containing review summary and group decision). - Inspection - Inspection schedule and memo (defining individual roles and responsibilities
NASA Technical Reports Server (NTRS)
Lange, R. Connor
2012-01-01
Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common: all of their components had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively and can validate SMAP FSW modules quicker and test them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering the use of these automated tools in future FSW validation efforts.
Dynamic assertion testing of flight control software
NASA Technical Reports Server (NTRS)
Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.
1985-01-01
An experiment in using assertions to dynamically test fault tolerant flight software is described. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters.
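Executable assertions of this kind are simply predicates over program state evaluated on every control step, e.g. range and rate-limit checks; a violated assertion signals a fault, and the variables feeding each checked quantity are tested collaterally. A minimal sketch; the limits are invented, not those of the experiment.

    def check_pitch_step(cmd, prev_cmd, altitude, dt):
        """Executable assertions on one pitch-axis control step."""
        assert -25.0 <= cmd <= 25.0, "pitch command out of range"
        assert abs(cmd - prev_cmd) / dt <= 60.0, "pitch rate limit exceeded"
        assert altitude > 0.0, "invalid altitude input"   # collateral check

    check_pitch_step(cmd=3.2, prev_cmd=2.9, altitude=1500.0, dt=0.02)  # passes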
Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.
78 FR 25482 - Notice of Revised Determination on Reconsideration
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
...-PROGRESSIVE SOFTWARE COMPUTING, QUALITY TESTING SERVICES, INC., RAILROAD CONSTRUCTION CO. OF SOUTH JERSEY, INC..., LP, PSCI- Progressive Software Computing, Quality Testing Services, Inc., Railroad Construction Co..., ANDERSON CONSTRUCTION SERVICES, BAKER PETROLITE, BAKERCORP, BELL-FAST FIRE PROTECTION INC., BOLTTECH INC...
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.
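A bare-bones PSO tunes simulation parameters by minimizing the mismatch between simulated output and recorded flight data. A minimal sketch under invented assumptions: the two-parameter "simulation" and the flight samples below are stand-ins, not Morpheus models.

    import numpy as np

    def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm: particles track personal bests and are
        steered toward the swarm-wide best position."""
        rng = np.random.default_rng(0)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pcost = x.copy(), np.array([objective(p) for p in x])
        g = pbest[pcost.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            cost = np.array([objective(p) for p in x])
            better = cost < pcost
            pbest[better], pcost[better] = x[better], cost[better]
            g = pbest[pcost.argmin()]
        return g, pcost.min()

    flight = np.array([1.0, 2.1, 2.9])                 # recorded samples (invented)
    def mismatch(p):
        sim = p[0] * np.array([1.0, 2.0, 3.0]) + p[1]  # stand-in simulation
        return ((sim - flight)**2).sum()

    best, err = pso(mismatch, np.array([[0.0, 2.0], [-1.0, 1.0]]))
    print("tuned parameters:", best, "residual:", err)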
NASA software documentation standard software engineering program
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.
15 CFR 740.9 - Temporary imports, exports, reexports, and transfers (in-country) (TMP).
Code of Federal Regulations, 2014 CFR
2014-01-01
... commodities and software may be placed in a bonded warehouse or a storage facility provided that the exporter... the end of the beta test period as defined by the software producer or, if the software producer does... software. (a) Temporary exports, reexports, and transfers (in-country). License Exception TMP authorizes...
A Reconfigurable Simulation-Based Test System for Automatically Assessing Software Operating Skills
ERIC Educational Resources Information Center
Su, Jun-Ming; Lin, Huan-Yu
2015-01-01
In recent years, software operating skills, the ability in computer literacy to solve problems using specific software, has become much more important. A great deal of research has also proven that students' software operating skills can be efficiently improved by practicing customized virtual and simulated examinations. However, constructing…
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2010 CFR
2010-10-01
... and software system safety as part of the pre-revenue service testing of the equipment. (d)(1... safely by initiating a full service brake application in the event of a hardware or software failure that... 49 Transportation 4 2010-10-01 2010-10-01 false Train electronic hardware and software safety. 238...
Estimating Software Effort Hours for Major Defense Acquisition Programs
ERIC Educational Resources Information Center
Wallshein, Corinne C.
2010-01-01
Software Cost Estimation (SCE) uses labor hours or effort required to conceptualize, develop, integrate, test, field, or maintain program components. Department of Defense (DoD) SCE can use initial software data parameters to project effort hours for large, software-intensive programs for contractors reporting the top levels of process maturity,…
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies, reported in NASA-CR-165836 Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378 Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
STRS Radio Service Software for NASA's SCaN Testbed
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Bishop, Daniel Wayne; Chelmins, David T.
2013-01-01
NASA's Space Communication and Navigation(SCaN) Testbed was launched to the International Space Station in 2012. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System (STRS) architecture standard. Pre-launch testing with the testbed's software defined radios was performed as part of system integration. Radio services for the JPL SDR were developed during system integration to allow the waveform application to operate properly in the space environment, especially considering thermal effects. These services include receiver gain control, frequency offset, IQ modulator balance, and transmit level control. Development, integration, and environmental testing of the radio services will be described. The added software allows the waveform application to operate properly in the space environment, and can be reused by future experimenters testing different waveform applications. Integrating such services with the platform provided STRS operating environment will attract more users, and these services are candidates for interface standardization via STRS.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1992-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that the testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents an alternative approach for constructing such models that is intended to fulfill specific software engineering needs (i.e. dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is as follows: (1) to measure the software system to be considered; and (2) to build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA/GSFC into two fault density classes: low and high. Also we evaluate the accuracy of the model and the insights it provides into the software process.
Standardized development of computer software. Part 2: Standards
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1978-01-01
This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.
Experiences in integrating auto-translated state-chart designs for model checking
NASA Technical Reports Server (NTRS)
Pingree, P. J.; Benowitz, E. G.
2003-01-01
In the complex environment of JPL's flight missions with increasing dependency on advanced software designs, traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development.
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
2005 8th Annual Systems Engineering Conference. Volume 4, Thursday
2005-10-27
requirements, allocation, and utilization statistics... Operations Decisions, Acquisition Decisions, Resource Management - Integrated Requirements/Allocation... Quality Improvement Consultants, Inc., "Automated Software Testing Increases Test Quality and Coverage Resulting in Improved Software Reliability," Mr... Steven Ligon, SAIC; The Return of Discipline, Ms. Jacqueline Townsend, Air Force Materiel Command; Track 4 - Net Centric Operations: Testing Net-Centric
NASA Astrophysics Data System (ADS)
Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur
2015-05-01
Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors, and strict time-to-market deadlines are amongst the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e. one based on t-way parameter interaction, called t-way testing) can be effective in reducing the number of test cases without affecting the fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Here, there is a need for test engineers to measure the effectiveness of a partially executed test suite in order for them to assess the risk they have to take. Motivated by the abovementioned problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies using the tuples coverage method. Here, test engineers can predict the effectiveness of the testing process if only part of the original test cases is executed.
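Tuples coverage measures what fraction of all required t-way parameter-value combinations a partially executed suite actually exercises. A minimal sketch for t = 2 (pairwise); the three-parameter system is invented.

    from itertools import combinations, product

    def pairwise_coverage(params, executed_tests):
        """Fraction of all 2-way value combinations covered by the
        executed tests; params maps name -> values, tests are dicts."""
        names = sorted(params)
        required = set()
        for p, q in combinations(names, 2):
            required |= {(p, a, q, b) for a, b in product(params[p], params[q])}
        covered = {(p, t[p], q, t[q])
                   for t in executed_tests for p, q in combinations(names, 2)}
        return len(covered & required) / len(required)

    params = {"os": ["linux", "win"], "db": ["pg", "my"], "net": ["v4", "v6"]}
    executed = [{"os": "linux", "db": "pg", "net": "v4"},
                {"os": "win",   "db": "my", "net": "v6"}]
    print("pairwise coverage: %.0f%%" % (100 * pairwise_coverage(params, executed)))
    # 50% -- the two executed tests cover 6 of the 12 required pairs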
Validating New Software for Semiautomated Liver Volumetry--Better than Manual Measurement?
Noschinski, L E; Maiwald, B; Voigt, P; Wiltberger, G; Kahn, T; Stumpp, P
2015-09-01
This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33% vs. 57%, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04 min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience. Both tested types of software allow exact volumetry of resected liver parts. Preoperative prediction can be performed more accurately with the semiautomated software. The semiautomated software is nearly four times faster than the tested manual program and less dependent on the user's experience. © Georg Thieme Verlag KG Stuttgart · New York.
Product-oriented Software Certification Process for Software Synthesis
NASA Technical Reports Server (NTRS)
Nelson, Stacy; Fischer, Bernd; Denney, Ewen; Schumann, Johann; Richardson, Julian; Oh, Phil
2004-01-01
The purpose of this document is to propose a product-oriented software certification process to facilitate use of software synthesis and formal methods. Why is such a process needed? Currently, software is tested until deemed bug-free rather than proving that certain software properties exist. This approach has worked well in most cases, but unfortunately, deaths still occur due to software failure. Using formal methods (techniques from logic and discrete mathematics like set theory, automata theory and formal logic as opposed to continuous mathematics like calculus) and software synthesis, it is possible to reduce this risk by proving certain software properties. Additionally, software synthesis makes it possible to automate some phases of the traditional software development life cycle resulting in a more streamlined and accurate development process.
Towards an Open, Distributed Software Architecture for UxS Operations
NASA Technical Reports Server (NTRS)
Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette
2015-01-01
To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.
SEPAC software configuration control plan and procedures, revision 1
NASA Technical Reports Server (NTRS)
1981-01-01
SEPAC Software Configuration Control Plan and Procedures are presented. The objective of the software configuration control is to establish the process for maintaining configuration control of the SEPAC software, beginning with the baselining of SEPAC Flight Software Version 1 and encompassing the integration and verification tests through Spacelab Level IV Integration. The procedures are designed to provide a simplified but complete configuration control process. The intent is to require a minimum amount of paperwork but provide total traceability of the SEPAC software.
ACUTE TO CHRONIC ESTIMATION SOFTWARE FOR WINDOWS
Chronic No-Observed Effect Concentrations (NOEC) are commonly determined by either using acute-to-chronic ratios or by performing an ANOVA on chronic test data; both require lengthy and expensive chronic test results. Acute-to-Chronic Estimation (ACE) software was developed to p...
ENVIRONMENTAL METHODS TESTING SITE PROJECT: DATA MANAGEMENT PROCEDURES PLAN
The Environmental Methods Testing Site (EMTS) Data Management Procedures Plan identifies the computer hardware and software resources used in the EMTS project. It identifies the major software packages that are available for use by principal investigators for the analysis of data...
Topalov, Angel A; Katsounaros, Ioannis; Meier, Josef C; Klemm, Sebastian O; Mayrhofer, Karl J J
2011-11-01
This paper describes a system for performing electrochemical catalyst testing where all hardware components are controlled simultaneously using a single LabVIEW-based software application. The software that we developed can be operated in both manual mode for exploratory investigations and automatic mode for routine measurements, by using predefined execution procedures. The latter enables the execution of high-throughput or combinatorial investigations, which decrease substantially the time and cost for catalyst testing. The software was constructed using a modular architecture which simplifies the modification or extension of the system, depending on future needs. The system was tested by performing stability tests of commercial fuel cell electrocatalysts, and the advantages of the developed system are discussed. © 2011 American Institute of Physics
NASA Astrophysics Data System (ADS)
Kristianti, Y.; Prabawanto, S.; Suhendra, S.
2017-09-01
This study aims to examine the critical thinking ability of students who learn mathematics through the ASSURE learning model assisted by Autograph software. The design of this study was experimental, with a pre-test and post-test control group. The experimental group received mathematics instruction with the ASSURE model assisted by Autograph software, and the control group received mathematics instruction with a conventional model. The data were obtained through critical thinking skills tests. This research was conducted at the junior high school level, with the population comprising students of a junior high school in Subang Regency in the 2016/2017 school year, and a sample of two classes of grade VIII students from a junior high school in Subang Regency. The research data were analysed quantitatively: a one-way ANOVA test was performed on the normalized gain between the two sample groups. The results show that mathematics learning with the ASSURE model assisted by Autograph software can improve the critical thinking ability of junior high school students, and does so significantly better than the conventional model.
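The gain analysis described computes each student's normalized gain g = (post - pre) / (max - pre) and compares groups with a one-way ANOVA. A sketch with invented scores on a 100-point test:

    import numpy as np
    from scipy.stats import f_oneway

    def normalized_gain(pre, post, max_score=100.0):
        """Hake's normalized gain: fraction of possible improvement achieved."""
        pre, post = np.asarray(pre, float), np.asarray(post, float)
        return (post - pre) / (max_score - pre)

    exp_gain = normalized_gain([40, 55, 50, 45], [80, 85, 75, 70])   # ASSURE group
    ctrl_gain = normalized_gain([42, 50, 48, 46], [60, 65, 58, 62])  # conventional

    stat, p = f_oneway(exp_gain, ctrl_gain)
    print("F = %.2f, p = %.4f" % (stat, p))   # small p favours the ASSURE group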
Validation of highly reliable, real-time knowledge-based systems
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1988-01-01
Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.
HITCal: a software tool for analysis of video head impulse test responses.
Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás
2015-09-01
The developed software (HITCal) may be a useful tool in the analysis and measurement of the saccadic video head impulse test (vHIT) responses and with the experience obtained during its use the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. To develop a (software) method to analyze and explore the vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed; extended head impulse exploration and measurement tools were created and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with the data collected retrospectively in three reference centers. This HITs database was evaluated by humans and was also computed with HITCal. The authors have successfully built HITCal and it has been released as open source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccades algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).
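Cohen's kappa corrects raw agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A sketch of the two-rater binary case (saccade present/absent); the label sequences are invented, not HITCal data.

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two binary raters."""
        n = len(labels_a)
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        p_a1 = sum(labels_a) / n            # rater A's positive rate
        p_b1 = sum(labels_b) / n            # rater B's positive rate
        p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
        return (p_o - p_e) / (1 - p_e)

    human    = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]   # invented human labels
    software = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]   # invented algorithm labels
    print("kappa = %.2f" % cohens_kappa(human, software))   # 0.58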
Web-Based Software for Managing Research
NASA Technical Reports Server (NTRS)
Hoadley, Sherwood T.; Ingraldi, Anthony M.; Gough, Kerry M.; Fox, Charles; Cronin, Catherine K.; Hagemann, Andrew G.; Kemmerly, Guy T.; Goodman, Wesley L.
2007-01-01
aeroCOMPASS is a software system, originally designed to aid in the management of wind tunnels at Langley Research Center, that could be adapted to provide similar aid to other enterprises in which research is performed in common laboratory facilities by users who may be geographically dispersed. Included in aeroCOMPASS is Web-interface software that provides a single, convenient portal to a set of project- and test-related software tools and other application programs. The heart of aeroCOMPASS is a user-oriented document-management subsystem that enables geographically dispersed users to easily share and manage a variety of documents. A principle of "write once, read many" is implemented throughout aeroCOMPASS to eliminate the need for multiple entry of the same information. The Web framework of aeroCOMPASS provides links to client-side application programs that are fully integrated with databases and server-side application programs. Other subsystems of aeroCOMPASS reserve hardware, track requests and user feedback, generate interactive notes, administer a customer-satisfaction questionnaire, manage test execution, archive test metadata, support test planning, and provide online help and instruction for users.
Requirements Analysis for Large Ada Programs: Lessons Learned on CCPDS-R
1989-12-01
The Software Requirements Specification (SRS) was finalized only after the design had matured, and its role became the tester's contract; this approach was not optimal from the formal-testing standpoint. Another constraint on the software development process is the necessity to include sufficient testing margin in the CPU processing load. These constraints primarily affect algorithm allocations, and timing requirements are by-products of the software design process when multiple CSCIs are executed within a single processor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok
The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state of the art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing failures of digital instrumentation and control (I&C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
NASA Astrophysics Data System (ADS)
Frailis, M.; Maris, M.; Zacchei, A.; Morisset, N.; Rohlfs, R.; Meharga, M.; Binko, P.; Türler, M.; Galeotta, S.; Gasparo, F.; Franceschi, E.; Butler, R. C.; D'Arcangelo, O.; Fogliani, S.; Gregorio, A.; Lowe, S. R.; Maggio, G.; Malaspina, M.; Mandolesi, N.; Manzato, P.; Pasian, F.; Perrotta, F.; Sandri, M.; Terenzi, L.; Tomasi, M.; Zonca, A.
2009-12-01
The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment, which had to adhere strictly to the project schedule to be ready for launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the verification and validation of the software. The purpose of this work is to show an example of procedures, test development, and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches were used to test the scientific and housekeeping data processing. Scientific data processing was tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For the housekeeping telemetry processing, validation software was developed to inject known parameter values into a set of real housekeeping packets and compare them with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrated that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
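The housekeeping validation strategy described, injecting known parameter values into packets and comparing the reconstructed timelines against the injected truth, generalizes well beyond Planck; a minimal sketch in which `pack_hk` and `level1_process` are hypothetical stand-ins for the real LFI Level 1 interfaces:

```python
# End-to-end check pattern for HK telemetry: inject known values, process,
# then compare the reconstructed timeline with the injected truth.
# `pack_hk` and `level1_process` are hypothetical stand-ins, not LFI code.
import numpy as np

def pack_hk(values):
    # stand-in for writing known parameter values into HK packets
    return [{"param": "SENSOR_TEMP", "raw": v} for v in values]

def level1_process(packets):
    # stand-in for the Level 1 software that rebuilds timelines from packets
    return np.array([p["raw"] for p in packets], dtype=float)

truth = np.linspace(20.0, 25.0, 100)        # injected parameter values
timeline = level1_process(pack_hk(truth))   # what Level 1 reconstructs

assert np.allclose(timeline, truth, atol=1e-9), "HK processing corrupted values"
print("HK timeline matches injected values")
```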
Developing high-quality educational software.
Johnson, Lynn A; Schleyer, Titus K L
2003-11-01
The development of effective educational software requires a systematic process executed by a skilled development team. This article describes the core skills required of the development team members for the six phases of successful educational software development. During analysis, the foundation of product development is laid including defining the audience and program goals, determining hardware and software constraints, identifying content resources, and developing management tools. The design phase creates the specifications that describe the user interface, the sequence of events, and the details of the content to be displayed. During development, the pieces of the educational program are assembled. Graphics and other media are created, video and audio scripts written and recorded, the program code created, and support documentation produced. Extensive testing by the development team (alpha testing) and with students (beta testing) is conducted. Carefully planned implementation is most likely to result in a flawless delivery of the educational software and maintenance ensures up-to-date content and software. Due to the importance of the sixth phase, evaluation, we have written a companion article on it that follows this one. The development of a CD-ROM product is described including the development team, a detailed description of the development phases, and the lessons learned from the project.
The Environmental Control and Life Support System (ECLSS) advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, Ray
1990-01-01
The objective of the Environmental Control and Life Support System (ECLSS) Advanced Automation Project is to influence the design of the initial and evolutionary Space Station Freedom Program (SSFP) ECLSS toward a man-made closed environment in which minimal flight and ground manpower is needed. Another objective is to capture ECLSS design and development knowledge for future missions. Our approach has been to (1) analyze the SSFP ECLSS, (2) envision as our goal a fully automated evolutionary environmental control system, an augmentation of the baseline, and (3) document the advanced software systems, hooks, and scars that will be necessary to achieve this goal. From this analysis, prototype software is being developed and will be tested using air and water recovery simulations and hardware subsystems. In addition, the advanced software is being designed, developed, and tested using an automation software management plan and lifecycle tools. Automated knowledge acquisition, engineering, verification, and testing tools are being used to develop the software. In this way, we can capture ECLSS development knowledge for future use, develop more robust and complex software, provide feedback to the knowledge-based system tool community, and ensure proper visibility of our efforts.
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
A Human Reliability Based Usability Evaluation Method for Safety-Critical Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillippe Palanque; Regina Bernhaupt; Ronald Boring
2006-04-01
Recent years have seen increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. Yet the unreliability of interactive software can jeopardize usability evaluation by producing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems and to minimize duplicate effort in both communities.
NASA Astrophysics Data System (ADS)
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Finding software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction must not only state the existence of defects but also produce a prioritized list of which modules require more intensive testing, so that test resources can be allocated efficiently. Learning to rank is one approach that can provide defect-module ranking data for software testing purposes. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank approach to software defect prediction. We used 11 public benchmark data sets as experimental data. Our overall results demonstrate that prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and do worse on 1 data set. Thus, we conclude that applying chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of defect-module ranking on data sets that have high-dimensional features.
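A minimal sketch of the two ingredients named in "chaotic Gaussian particle swarm optimization": positions seeded from a chaotic logistic map instead of a uniform RNG, and Gaussian perturbations in the velocity update. This illustrates the general technique on a toy objective, not the authors' exact formulation or their learning-to-rank coupling:

```python
# Particle swarm optimizer with (a) chaotic logistic-map initialization and
# (b) Gaussian-perturbed velocity updates. Objective and constants are
# illustrative placeholders.
import numpy as np

def logistic_map(n, x0=0.7, r=4.0):
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)   # chaotic for r = 4
        xs[i] = x
    return xs

def chaotic_gaussian_pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    rng = np.random.default_rng(0)
    # chaotic initialization: map logistic-map samples into the search box
    pos = lo + (hi - lo) * logistic_map(n_particles * dim).reshape(n_particles, dim)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(f, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        # Gaussian perturbation replaces the usual uniform random factors
        r1 = rng.normal(1.0, 0.3, pos.shape)
        r2 = rng.normal(1.0, 0.3, pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(f, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = chaotic_gaussian_pso(lambda x: float(np.sum(x**2)), dim=5)
print(best, val)
```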
Software for MR image overlay guided needle insertions: the clinical translation process
NASA Astrophysics Data System (ADS)
Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor
2013-03-01
PURPOSE: Needle guidance software using augmented-reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open-source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that the software requirements were successfully met after a limited number of operating room tests.
2017-03-17
NASA engineers and test directors gather in Firing Room 3 in the Launch Control Center at NASA's Kennedy Space Center in Florida, to watch a demonstration of the automated command and control software for the agency's Space Launch System (SLS) and Orion spacecraft. In front, far right, is Charlie Blackwell-Thompson, launch director for Exploration Mission 1 (EM-1). The software is called the Ground Launch Sequencer. It will be responsible for nearly all of the launch commit criteria during the final phases of launch countdowns. The Ground and Flight Application Software Team (GFAST) demonstrated the software. It was developed by the Command, Control and Communications team in the Ground Systems Development and Operations (GSDO) Program. GSDO is helping to prepare the center for the first test flight of Orion atop the SLS on EM-1.
Lean Development with the Morpheus Simulation Software
NASA Technical Reports Server (NTRS)
Brogley, Aaron C.
2013-01-01
The Morpheus project is an autonomous robotic testbed in development at NASA's Johnson Space Center (JSC) with support from other centers. Its primary objectives are to test new 'green' fuel propulsion systems and to demonstrate the capability of the Autonomous Landing Hazard Avoidance Technology (ALHAT) sensor, provided by the Jet Propulsion Laboratory (JPL), on a lunar landing trajectory. If successful, these technologies and the lessons learned from the Morpheus testing cycle may be incorporated into a landing descent vehicle used on the Moon, an asteroid, or Mars. In an effort to reduce development costs and cycle time, the project employs lean development engineering practices in its flight and simulation software. The Morpheus simulation makes use of existing software packages where possible to reduce development time. Flight software is developed and tested primarily through frequent test operation of the vehicle, incrementally increasing the scope of each test. Rapid development cycles carry the risk of losing the vehicle and the mission, but efficient progress in development would not be possible without accepting that risk.
Software for Testing Electroactive Structural Components
NASA Technical Reports Server (NTRS)
Moses, Robert W.; Fox, Robert L.; Dimery, Archie D.; Bryant, Robert G.; Shams, Qamar
2003-01-01
A computer program generates a graphical user interface that, in combination with its other features, facilitates the acquisition and preprocessing of experimental data on the strain response, hysteresis, and power consumption of a multilayer composite-material structural component containing one or more built-in sensors and/or actuators based on piezoelectric materials. This program runs in conjunction with LabVIEW software in a computer-controlled instrumentation system. For a test, a specimen is instrumented with applied-voltage and current sensors and with strain gauges. Once the computational connection to the test setup has been made via the LabVIEW software, this program causes the test instrumentation to step through specified configurations. If the user is satisfied with the test results as displayed by the software, the user activates an icon on a front-panel display, causing the raw current, voltage, and strain data to be digitized and saved. The data are also put into a spreadsheet and can be plotted on a graph. Graphical displays are saved in an image file for future reference. The program also computes and displays the power and the phase angle between voltage and current.
ERIC Educational Resources Information Center
Char, Cynthia
Several research and design issues to be considered when creating educational software were identified by a field test evaluation of three types of innovative software created at Bank Street College: (1) Probe, software for measuring and graphing temperature data; (2) Rescue Mission, a navigation game that illustrates the computer's use for…
NASA Astrophysics Data System (ADS)
1981-03-01
Support documentation for a second generation heliostat project is presented. Flowcharts of control software are included. Numerical and graphic test results are provided. Project management information is also provided.
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive-feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for test case generation: an improved local pheromone update strategy, an improved pheromone volatilization coefficient (IPVACO), and an improved global path pheromone update strategy (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO) based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
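For orientation, a compact sketch of the baseline ACO loop the paper improves on: ants assemble test suites, and pheromone accumulates on cases that raise branch coverage. The coverage matrix is synthetic, and the update rules here are the plain ones that IPVACO, IGPACO, and ACIACO modify:

```python
# Baseline ant-colony loop for test-suite selection: pheromone guides ants
# toward test cases that add branch coverage. Coverage matrix is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_branches = 30, 50
covers = rng.random((n_cases, n_branches)) < 0.15   # case i covers branch j

pheromone = np.ones(n_cases)
best_suite, best_cov = None, -1

for iteration in range(50):
    for ant in range(10):
        suite, covered = [], np.zeros(n_branches, bool)
        for _ in range(8):                           # suite budget: 8 cases
            gain = covers[:, ~covered].sum(axis=1) + 1e-9  # new branches per case
            prob = pheromone * gain
            prob /= prob.sum()
            pick = rng.choice(n_cases, p=prob)
            suite.append(pick)
            covered |= covers[pick]
        cov = int(covered.sum())
        if cov > best_cov:
            best_suite, best_cov = suite, cov
    pheromone *= 0.9                                 # evaporation
    pheromone[best_suite] += best_cov / n_branches   # reinforce the best suite

print(f"best coverage: {best_cov}/{n_branches} branches with {len(best_suite)} cases")
```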
Objective and Item Banking Computer Software and Its Use in Comprehensive Achievement Monitoring.
ERIC Educational Resources Information Center
Schriber, Peter E.; Gorth, William P.
The current emphasis on objectives and test item banks for constructing more effective tests is being augmented by increasingly sophisticated computer software. Items can be catalogued in numerous ways for retrieval. The items as well as instructional objectives can be stored and test forms can be selected and printed by the computer. It is also…
The Software Element of the NASA Portable Electronic Device Radiated Emissions Investigation
NASA Technical Reports Server (NTRS)
Koppen, Sandra V.; Williams, Reuben A. (Technical Monitor)
2002-01-01
NASA Langley Research Center's (LaRC) High Intensity Radiated Fields Laboratory (HIRF Lab) recently conducted a series of electromagnetic radiated emissions tests under a cooperative agreement with Delta Airlines and an interagency agreement with the FAA. The frequency spectrum environment at a commercial airport was measured on location. The environment survey provides a comprehensive picture of the complex electromagnetic environment present in areas outside the aircraft. In addition, radiated emissions tests were conducted on portable electronic devices (PEDs) that may be brought onboard aircraft. These tests were performed in both semi-anechoic and reverberation chambers located in the HIRF Lab. The PEDs included cell phones, laptop computers, electronic toys, and family radio systems. The data generated during the tests are intended to support research on the effect of radiated emissions from wireless devices on aircraft systems. Both test systems relied on customized control and data-reduction software to provide test and instrument control, data acquisition, a user interface, real-time data reduction, and data analysis. The software executed on PCs running MS Windows 98 and 2000, and used Agilent VEE Pro (Visual Engineering Environment) development software, Component Object Model (COM) technology, and MS Excel.
SCaN Testbed Software Development and Lessons Learned
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Varga, Denise M.
2012-01-01
The National Aeronautics and Space Administration (NASA) has developed an on-orbit, adaptable, Software Defined Radio (SDR) / Space Telecommunications Radio System (STRS)-based testbed facility to conduct a suite of experiments to advance technologies, reduce risk, and enable future mission capabilities on the International Space Station (ISS). The SCaN Testbed Project will provide NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in the laboratory and space environment based on reconfigurable SDR platforms and the STRS Architecture. The SDRs are a new technology for NASA, and the support infrastructure they require differs from that of legacy, fixed-function radios. SDRs offer the ability to reconfigure on-orbit communications by changing software for new waveforms and operating systems to enable new capabilities or fix anomalies, which was not previously an option. They are not stand-alone devices; they required a new approach to effectively control them and flow data, and extensive software had to be developed to utilize the full potential of these reconfigurable platforms. The paper focuses on development, integration, and testing as related to the avionics processor system and the software required to command, control, monitor, and interact with the SDRs, as well as the other communication payload elements. An extensive effort was required to develop the flight software and meet NASA requirements for software quality and safety. The flight avionics must be radiation tolerant, and these processors have limited capability in comparison to terrestrial counterparts. A big challenge was that there are three SDRs onboard, and interfacing with multiple SDRs simultaneously complicated the effort. The effort also included ground software, which is a key element both for commanding the payload and for displaying data created by the payload. Verification of the software was an extensive effort, and the challenges of specifying a suitable test matrix for reconfigurable systems that offer numerous configurations are highlighted. Since flight system testing requires methodical, controlled testing that limits risk, a nearly identical ground system was required to develop the software and write verification procedures before installation and testing on the flight system. The development of the SCaN Testbed was an accelerated effort to meet launch constraints, and this paper discusses tradeoffs made to balance needed software functionality while still maintaining the schedule. Future upgrades are discussed that optimize the avionics and allow experimenters to utilize the SCaN Testbed's potential.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. Few SRGMs in the literature, however, differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced in the process, which is referred to as the imperfect debugging phenomenon. In this study, a model is developed that incorporates the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
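The baseline NHPP building block behind such SRGMs is a mean-value function fitted to cumulative failure counts; a minimal sketch using the classic Goel-Okumoto form (the paper's model layers testing coverage and imperfect removal efficiency on top of this kind of curve; the failure data below are illustrative):

```python
# Baseline NHPP software-reliability fit: Goel-Okumoto mean-value function
# m(t) = a * (1 - exp(-b t)), fitted to cumulative failure counts by least
# squares. Data are illustrative, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 21, dtype=float)                      # weeks of testing
failures = np.array([ 5,  9, 13, 16, 19, 21, 23, 25, 26, 27,
                     28, 29, 30, 30, 31, 31, 32, 32, 32, 33])  # cumulative counts

(a_hat, b_hat), _ = curve_fit(mean_value, t, failures, p0=(40.0, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"faults remaining after week 20 ~ {a_hat - mean_value(20, a_hat, b_hat):.1f}")
```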
1988-05-01
Cost and size estimates were obtained from Dr. Barry Boehm's Software Engineering Economics [1] (Englewood Cliffs, NJ) and from ESD/MITRE Software Center acquisition data (Contract No. F19628-86-C-0001). References include M. H. Halstead, Elements of Software Science (New York); B. Beizer, Software System Testing and Quality Assurance (New York: Van Nostrand); and Roger S. Pressman, Software Engineering (New York).
NASA software specification and evaluation system: Software verification/validation techniques
NASA Technical Reports Server (NTRS)
1977-01-01
NASA software requirement specifications were used in the development of a system for validating and verifying computer programs. The software specification and evaluation system (SSES) provides for the effective and efficient specification, implementation, and testing of computer software programs. The system as implemented will produce structured FORTRAN or ANSI FORTRAN programs, but the principles upon which SSES is designed allow it to be easily adapted to other high order languages.
Software production methodology tested project
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
The history and results of a 3 1/2-year study in software development methodology are reported. The findings of this study have become the basis for DSN software development guidelines and standard practices. The article discusses accomplishments, discoveries, problems, recommendations and future directions.
Evaluation of Open-Source Hard Real Time Software Packages
NASA Technical Reports Server (NTRS)
Mattei, Nicholas S.
2004-01-01
Reliable software is, at times, hard to find. No piece of software can be guaranteed to work in every situation that may arise during its use here at Glenn Research Center or in space. The job of the Software Assurance (SA) group in the Risk Management Office is to rigorously test software in an effort to ensure that it matches the contract specifications. In some cases the SA team also researches new alternatives for selected software packages. This testing and research is an integral part of the department of Safety and Mission Assurance. Real-time operation, in reference to a computer system, is a particular style of handling the timing and manner in which inputs and outputs are processed. A real-time system executes these commands and the appropriate processing within a defined timing constraint. Within this definition there are two classifications of real-time systems: hard and soft. A soft real-time system is one in which missing a particular timing constraint produces no critical results. A hard real-time system, on the other hand, is one in which missing a timing constraint could be catastrophic. An example of a soft real-time system is a DVD decoder: if a particular piece of input data is not decoded and displayed on the screen at exactly the correct moment, nothing critical results, and the user may not even notice. However, a hard real-time system is needed to control the timing of fuel injection or steering on the Space Shuttle; a delay of even a fraction of a second could be catastrophic in such a complex system. The current real-time system employed by most NASA projects is Wind River's VxWorks operating system. This proprietary operating system can be configured to meet many of NASA's needs, and it provides very accurate and reliable hard real-time performance. The downside is that, being proprietary, it is also costly to implement. The prospect of replacing this somewhat costly implementation is the focus of one of the SA group's current research projects. The explosion of open-source software in the last ten years has led to the development of a multitude of software solutions that were once produced only by major corporations. The benefits of these open projects include faster release and bug-patching cycles as well as inexpensive, if not free, software solutions. The main packages for hard real-time solutions under Linux are the Real Time Application Interface (RTAI) and two varieties of Real Time Linux (RTL), RTLFree and RTLPro. During my time here at NASA I have been testing various hard real-time solutions operating as layers on the Linux operating system. All testing is being run on an Intel SBC 2590, a common embedded hardware platform. The test plan was provided to me by the Software Assurance group at the start of my internship, and my job has been to test the systems by developing and executing the test cases on the hardware. These tests are constructed so that the Software Assurance group can get hard test data for a comparison between the open-source and proprietary implementations of hard real-time solutions.
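The soft/hard distinction described above comes down to whether worst-case latency is bounded; a user-space sketch that measures the scheduling jitter of a nominally periodic loop, the kind of figure that separates a stock kernel from RTAI/RTL layers (a plain Python process can only demonstrate soft real-time behavior):

```python
# Measure scheduling jitter of a nominally 1 ms periodic loop. On a stock
# desktop OS the worst case is effectively unbounded -- which is exactly why
# hard real-time layers such as RTAI/RTL exist. Figures from pure Python are
# only indicative of soft real-time behavior.
import time

PERIOD = 0.001     # 1 ms target period
N = 5000

deadline = time.monotonic()
lateness = []
for _ in range(N):
    deadline += PERIOD
    while time.monotonic() < deadline:   # busy-wait to the next deadline
        pass
    lateness.append(time.monotonic() - deadline)

lateness_us = sorted(l * 1e6 for l in lateness)
print(f"median lateness: {lateness_us[N // 2]:.1f} us")
print(f"worst case:      {lateness_us[-1]:.1f} us")   # unbounded on a stock kernel
```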
Solar Constant (SOLCON) Experiment: Ground Support Equipment (GSE) software development
NASA Technical Reports Server (NTRS)
Gibson, M. Alan; Thomas, Susan; Wilson, Robert
1991-01-01
The Solar Constant (SOLCON) Experiment, the objective of which is to determine the solar constant value and its variability, is scheduled for launch as part of the Space Shuttle/Atmospheric Laboratory for Applications and Science (ATLAS) Spacelab mission. The Ground Support Equipment (GSE) software was developed to monitor and analyze the SOLCON telemetry data during flight and to test the instrument on the ground. The design and development of the GSE software are discussed. The SOLCON instrument was tested during the Davos International Solar Intercomparison in 1989, and the SOLCON data collected during the tests are analyzed to study the behavior of the instrument.
1984-09-28
Uncertainty in model variables is expressed as probability density distributions before simulation of the model, with searches for reality checks. A prior probability that the software contains errors is assigned and then updated as test failure data are accumulated; only a prior of 1 (software known to contain errors) is excluded. Both parametric and nonparametric versions are discussed, and the author shows that the bootstrap underlies the jackknife method.
A software simulation study of a (255,223) Reed-Solomon encoder-decoder
NASA Technical Reports Server (NTRS)
Pollara, F.
1985-01-01
A set of software programs which simulates a (255,223) Reed-Solomon encoder/decoder pair is described. The transform decoder algorithm uses a modified Euclid algorithm, and closely follows the pipeline architecture proposed for the hardware decoder. Uncorrectable error patterns are detected by a simple test, and the inverse transform is computed by a finite field FFT. Numerical examples of the decoder operation are given for some test codewords, with and without errors. The use of the software package is briefly described.
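For scale, a (255,223) Reed-Solomon code appends 32 parity bytes to 223 data bytes and corrects up to 16 byte errors per codeword; a quick check using the third-party reedsolo package (an assumption for illustration; the simulator described in the report was custom software):

```python
# A (255,223) Reed-Solomon code: 223 data bytes + 32 parity bytes, correcting
# up to 16 byte errors per codeword. Uses the third-party `reedsolo` package
# (pip install reedsolo); recent versions return (message, codeword, errata).
from reedsolo import RSCodec

rsc = RSCodec(32)                       # 32 parity symbols -> (255, 223)
data = bytes(range(223))
codeword = bytearray(rsc.encode(data))  # 255 bytes

for i in (3, 50, 200):                  # corrupt 3 byte positions
    codeword[i] ^= 0xFF

decoded, _, _ = rsc.decode(bytes(codeword))
assert bytes(decoded) == data           # 3 errors <= 16, so decoding succeeds
print("corrected 3 byte errors")
```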
AXAF-1 High Resolution Assembly Image Model and Comparison with X-Ray Ground Test Image
NASA Technical Reports Server (NTRS)
Zissa, David E.
1999-01-01
The x-ray ground test of the AXAF-I High Resolution Mirror Assembly was completed in 1997 at the X-ray Calibration Facility at Marshall Space Flight Center. Mirror surface measurements by HDOS, alignment results from Kodak, and predicted gravity distortion in the horizontal test configuration are being used to model the x-ray test image. The Marshall Space Flight Center (MSFC) image modeling serves as a cross-check with Smithsonian Astrophysical Observatory modeling. The MSFC image prediction software has evolved from the MSFC model of the x-ray test of the largest AXAF-I mirror pair in 1991. The MSFC image modeling software development is being assisted by the University of Alabama in Huntsville. The modeling process, modeling software, and image prediction will be discussed. The image prediction will be compared with the x-ray test results.
NASA Technical Reports Server (NTRS)
Pajak, J. A.
1981-01-01
A data acquisition software program developed to operate in conjunction with the automated control system of the 25 kW PM Electric Power System Breadboard Test facility is described. The program provides limited interactive control of the breadboard test while acquiring data and monitoring parameters, allowing unattended continuous operation. The breadboard test facility has two positions for operating separate configurations. The main variable in each test setup is the high-voltage Ni-Cd battery.
Programs for Testing an SSME-Monitoring System
NASA Technical Reports Server (NTRS)
Lang, Andre; Cecil, Jimmie; Heusinger, Ralph; Freestone, Kathleen; Blue, Lisa; Wilkerson, DeLisa; McMahon, Leigh Anne; Hall, Richard B.; Varnavas, Kosta; Smith, Keary;
2007-01-01
A suite of computer programs has been developed for special test equipment (STE) that is used in verification testing of the Health Management Computer Integrated Rack Assembly (HMCIRA), a ground-based system of analog and digital electronic hardware and software for "flight-like" testing for development of components of an advanced health-management system for the space shuttle main engine (SSME). The STE software enables the STE to simulate the analog input and the data flow of an SSME test firing from start to finish.
MoniQA: a general approach to monitor quality assurance
NASA Astrophysics Data System (ADS)
Jacobs, J.; Deprez, T.; Marchal, G.; Bosmans, H.
2006-03-01
MoniQA ("Monitor Quality Assurance") is a new, non-commercial, independent quality assurance software application developed in our medical physics team. It is a complete Java TM - based modular environment for the evaluation of radiological viewing devices and it thus fits in the global quality assurance network of our (film less) radiology department. The purpose of the software tool is to guide the medical physicist through an acceptance protocol and the radiologist through a constancy check protocol by presentation of the necessary test patterns and by automated data collection. Data are then sent to a central management system for further analysis. At the moment more than 55 patterns have been implemented, which can be grouped in schemes to implement protocols (i.e. AAPMtg18, DIN and EUREF). Some test patterns are dynamically created and 'drawn' on the viewing device with random parameters as is the case in a recently proposed new pattern for constancy testing. The software is installed on 35 diagnostic stations (70 monitors) in a film less radiology department. Learning time was very limited. A constancy check -with the new pattern that assesses luminance decrease, resolution problems and geometric distortion- takes only 2 minutes and 28 seconds per monitor. The modular approach of the software allows the evaluation of new or emerging test patterns. We will report on the software and its usability: practicality of the constancy check tests in our hospital and on the results from acceptance tests of viewing stations for digital mammography.
Educational interactive multimedia software: The impact of interactivity on learning
NASA Astrophysics Data System (ADS)
Reamon, Derek Trent
This dissertation discusses the design, development, deployment, and testing of two versions of educational interactive multimedia software. Both versions of the software are focused on teaching mechanical engineering undergraduates the fundamentals of direct-current (DC) motor physics and selection. The two versions of the Motor Workshop software cover the same basic material on motors but differ in the level of interactivity between the students and the software. Here, the level of interactivity refers to the particular role of the computer in the interaction between the user and the software. In one version, the students navigate through information that is organized by topic, reading text and viewing embedded video clips; this is referred to as "low-level interactivity" software because the computer simply presents the content. In the other version, the students are given a task to accomplish: they must design a small motor-driven 'virtual' vehicle that competes against computer-generated opponents. The interaction is guided by the software, which offers advice from 'experts' and provides contextual information; we refer to this as "high-level interactivity" software because the computer actively participates in the interaction. The software was used in two sets of experiments, where students using the low-level interactivity software served as the control group and students using the highly interactive software were the treatment group. Data, including pre- and post-performance tests, questionnaire responses, learning-style characterizations, activity tracking logs, and videotapes, were collected for analysis. Statistical and observational research methods were applied to the various data to test the hypothesis that the level of interactivity affects the learning situation, with higher levels of interactivity being more effective for learning. The results show that both the low-level and high-level interactive versions of the software were effective in promoting learning about motors. The focus of learning varied between users of the two versions, however: the low-level version was more effective for teaching concepts and terminology, while the high-level version appeared to be more effective for teaching engineering applications.
Mars Science Laboratory Workstation Test Set
NASA Technical Reports Server (NTRS)
Henriquez, David A.; Canham, Timothy K.; Chang, Johnny T.; Villaume, Nathaniel
2009-01-01
The Workstation Test Set (WSTS), developed by the Mars Science Laboratory, is a computer program that enables flight software development on virtual MSL avionics. The WSTS is a non-real-time flight avionics simulator that is designed to be completely software-based and to run on a workstation-class Linux PC.
Attributes Effecting Software Testing Estimation; Is Organizational Trust an Issue?
ERIC Educational Resources Information Center
Hammoud, Wissam
2013-01-01
This quantitative correlational research explored the potential association between the levels of organizational trust and the software testing estimation. This was conducted by exploring the relationships between organizational trust, tester's expertise, organizational technology used, and the number of hours, number of testers, and time-coding…
P2P proteomics -- data sharing for enhanced protein identification
2012-01-01
Background In order to tackle the important and challenging problem in proteomics of identifying known and new protein sequences using high-throughput methods, we propose a data-sharing platform that uses fully distributed P2P technologies to share specifications of peer-interaction protocols and service components. By using such a platform, information to be searched is no longer centralised in a few repositories but gathered from experiments in peer proteomics laboratories, which can subsequently be searched by fellow researchers. Methods The system distributively runs a data-sharing protocol specified in the Lightweight Communication Calculus underlying the system, through which researchers interact via message passing. For this, researchers interact with the system through components that link to database querying systems based on BLAST and/or OMSSA and to GUI-based visualisation environments. We tested the proposed platform with data drawn from preexisting MS/MS data reservoirs from the 2006 ABRF (Association of Biomolecular Resource Facilities) test sample, which was extensively tested during the ABRF Proteomics Standards Research Group 2006 worldwide survey. In particular, we took the data available from a subset of proteomics laboratories of Spain's National Institute for Proteomics, ProteoRed, a network for the coordination, integration and development of the Spanish proteomics facilities. Results and Discussion We performed queries against nine databases, including seven ProteoRed proteomics laboratories, the NCBI Swiss-Prot database and the local database of the CSIC/UAB Proteomics Laboratory. A detailed analysis of the results indicated the presence of a protein that was supported by other NCBI matches and highly scored matches in several proteomics labs. The analysis clearly indicated that the protein was a relatively highly concentrated contaminant that could be present in the ABRF sample. This fact is evident from the information that could be derived from the proposed P2P proteomics system; however, it is not straightforward to arrive at the same conclusion by conventional means, as it is difficult to rule out organic contamination of samples. The actual presence of this contaminant was only established after the ABRF study of all the identifications reported by the laboratories. PMID:22293032
Shuttle avionics software development trials: Tribulations and successes, the backup flight system
NASA Technical Reports Server (NTRS)
Chevers, E. S.
1985-01-01
The development and verification of the Backup Flight System (BFS) software is discussed. The approach taken for the BFS was to develop a very simple and straightforward software program and then test it in every conceivable manner. The result was a program of approximately 12,000 full words, including ground checkout and the built-in test program for the computer. To perform verification, a series of tests was defined using actual flight-type hardware and simulated flight conditions. Simulated flights were then flown and detailed performance analysis was conducted. The intent of most BFS tests was to demonstrate that a stable flightpath could be obtained after engagement from an anomalous initial condition. The extension of the BFS to meet the requirements of the orbital flight test phase is also described.
Selecting Really Excellent Software for Young Adults.
ERIC Educational Resources Information Center
Polly, Jean Armour
1985-01-01
This article discusses criteria for a good computer software package to aid the public librarian in the building, weeding, and maintenance of a software collection for young adults. Highlights include manuals or documentation; bells, whistles, and color; and the true test of time. (EJS)
Software for an Experimental Air-Ground Data Link : Volume 2. System Operation Manual
DOT National Transportation Integrated Search
1975-10-01
This report documents the complete software system developed for the Experimental Data Link System which was implemented for flight test during the Air-Ground Data Link Development Program (FAA-TSC- Project Number FA-13). The software development is ...
Software Assurance Curriculum Project Volume 2: Undergraduate Course Outlines
2010-08-01
Contents: Acknowledgments; Abstract; An Undergraduate Curriculum Focus on Software Assurance; Computer Science I; Computer Science II. The outlines describe assurance practices that build confidence and can be integrated into traditional software development and acquisition process models; in addition to a technology focus, topics include testing throughout the software development life cycle (SDLC) and security and complexity, i.e., system development challenges such as security failures.
Modular Infrastructure for Rapid Flight Software Development
NASA Technical Reports Server (NTRS)
Pires, Craig
2010-01-01
This slide presentation reviews the use of a modular infrastructure to assist in the development of flight software. A feature of this program is the use of a model-based approach for application-unique software. Two programs on which this approach was used are reviewed: the development of software for the Hover Test Vehicle (HTV), and the Lunar Atmosphere and Dust Environment Explorer (LADEE).
Sanyal, Parikshit; Ganguli, Prosenjit; Barui, Sanghita; Deb, Prabal
2018-01-01
The Pap-stained cervical smear is a screening tool for cervical cancer. Commercial systems are used for automated screening of liquid-based cervical smears; however, there is no image analysis software for conventional cervical smears. The aim of this study was to develop and test the diagnostic accuracy of software for analysis of conventional smears. The software was developed using the Python programming language and open-source libraries, and it was standardized with images from the Bethesda Interobserver Reproducibility Project. One hundred and thirty images from smears reported as Negative for Intraepithelial Lesion or Malignancy (NILM), and 45 images in which some abnormality had been reported, were collected from the archives of the hospital. The software was then tested on the images. The software was able to segregate images based on overall nuclear:cytoplasmic ratio, coefficient of variation (CV) in nuclear size, nuclear membrane irregularity, and clustering. It flagged 68.88% of abnormal images, as well as 19.23% of NILM images. The major difficulties faced were segmentation of overlapping cell clusters and separation of neutrophils. The software shows potential as a screening tool for conventional cervical smears; however, further refinement of the technique is required.
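Two of the features named, overall nuclear:cytoplasmic ratio and coefficient of variation of nuclear size, reduce to straightforward region measurements once nuclei are segmented; a minimal scikit-image sketch (Otsu thresholding is a stand-in and, as the abstract notes for the real software, does not resolve overlapping clusters):

```python
# Per-image screening features from a segmented smear field: overall
# nuclear:cytoplasmic area ratio and coefficient of variation of nuclear size.
# Otsu thresholding is a stand-in; it does not resolve overlapping clusters.
import numpy as np
from skimage import filters, measure, morphology

def screening_features(gray):            # gray: 2-D float array, nuclei dark
    nuclei = gray < filters.threshold_otsu(gray)
    nuclei = morphology.remove_small_objects(nuclei, min_size=30)
    labels = measure.label(nuclei)
    areas = np.array([r.area for r in measure.regionprops(labels)])
    nc_ratio = nuclei.sum() / max((~nuclei).sum(), 1)   # nuclear vs rest of field
    cv_size = areas.std() / areas.mean() if len(areas) else 0.0
    return nc_ratio, cv_size
```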
NASA Technical Reports Server (NTRS)
Hammrs, Stephan R.
2008-01-01
Virtual Satellite (VirtualSat) is a computer program that creates an environment that facilitates the development, verification, and validation of flight software for a single spacecraft or for multiple spacecraft flying in formation. In this environment, enhanced functionality and autonomy of the navigation, guidance, and control systems of a spacecraft are provided by a virtual satellite, that is, a computational model that simulates the dynamic behavior of the spacecraft. Within this environment, it is possible to execute any associated software whose development could benefit from knowledge of, and possible interaction (typically, exchange of data) with, the virtual satellite. Examples of associated software include programs for simulating spacecraft power and thermal-management systems. This environment is independent of the flight hardware that will eventually host the flight software, making it possible to develop the software simultaneously with, or even before, delivery of the hardware. Optionally, by use of interfaces included in VirtualSat, real hardware can be used instead of simulations. The flight software, coded in the C or C++ programming language, is compilable and loadable into VirtualSat without any special modifications. Thus, VirtualSat can serve as a relatively inexpensive software testbed for the development, test, integration, and post-launch maintenance of spacecraft flight software.
Software Template for Instruction in Mathematics
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Moebes, Travis A.; Beall, Anna
2005-01-01
Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests that contain both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT also can function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students are thereby made to discover mathematical fundamentals and thereby come to understand mathematics more deeply than they could through simple memorization.
NASA Technical Reports Server (NTRS)
Hall, Drew P.; Ly, William; Howard, Richard T.; Weir, John; Rakoczy, John; Roe, Fred (Technical Monitor)
2002-01-01
The software development for an upgrade to the Hobby-Eberly Telescope (HET) was done in LabVIEW. In order to improve the performance of the HET at the McDonald Observatory, a closed-loop system had to be implemented to keep the mirror segments aligned during periods of observation. The control system, called the Segment Alignment Maintenance System (SAMS), utilized inductive sensors to measure the relative motions of the mirror segments. Software was developed in LabVIEW to tie the sensors, operator interface, and mirror-control motors together. Developing the software in LabVIEW allowed the system to be flexible, understandable, and modifiable by the end users. Since LabVIEW is built on block diagrams, the software naturally followed the designed control system's block and flow diagrams, and individual software blocks could be easily verified. LabVIEW's many built-in display routines allowed easy visualization of diagnostic and health-monitoring data during testing. Also, since LabVIEW is a multi-platform software package, different programmers could develop the code remotely on various types of machines. LabVIEW's ease of use facilitated rapid prototyping and field testing. There were some unanticipated difficulties in the software development, but the use of LabVIEW as the software "language" contributed to the overall success of the project.
NASA Software Documentation Standard
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.
NASA Technical Reports Server (NTRS)
1982-01-01
An effective data collection methodology for evaluating software development methodologies was applied to four different software development projects. Goals of the data collection included characterizing changes and errors, characterizing projects and programmers, identifying effective error detection and correction techniques, and investigating ripple effects. The data collected consisted of changes (including error corrections) made to the software after code was written and baselined, but before testing began. Data collection and validation were concurrent with software development. Changes reported were verified by interviews with programmers.
Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Lux, James P.
2014-01-01
The National Aeronautics and Space Administration (NASA) recently launched a new software defined radio research test bed to the International Space Station. The test bed, sponsored by the Space Communications and Navigation (SCaN) Office within NASA, is referred to as the SCaN Testbed. The SCaN Testbed is a highly capable communications system, composed of three software defined radios, integrated into a flight system and mounted to the truss of the International Space Station. Software defined radios offer the future promise of in-flight reconfigurability, autonomy, and eventually cognitive operation. The adoption of software defined radios offers space missions a new way to develop and operate space transceivers for communications and navigation. Reconfigurable or software defined radios, with communications and navigation functions implemented in software or VHDL (VHSIC Hardware Description Language), provide the capability to change the functionality of the radio during development or after launch. The ability to change the operating characteristics of a radio through software once deployed to space offers the flexibility to adapt to new science opportunities, recover from anomalies within the science payload or communication system, and potentially reduce development cost and risk by adapting generic space platforms to meet specific mission requirements. The software defined radios on the SCaN Testbed are each compliant with NASA's Space Telecommunications Radio System (STRS) Architecture. The STRS Architecture is an open, non-proprietary architecture that defines interfaces for the connections between radio components. It provides an operating environment that abstracts the communication waveform application from the underlying platform-specific hardware, such as digital-to-analog converters, analog-to-digital converters, oscillators, RF attenuators, automatic gain control circuits, FPGAs, and general-purpose processors, and from the interconnections among different radio components.
NASA Technical Reports Server (NTRS)
Stinnett, W. G.
1980-01-01
The modifications, additions, and testing results for a version of the Deep Space Station command software, generated for support of the Voyager Saturn encounter, are discussed. The software update requirements included efforts to: (1) recode portions of the software to permit recovery of approximately 2000 words of memory; (2) correct five Voyager Ground Data System liens; (3) provide the capability to automatically turn off the command processor assembly local printer during periods of low activity; and (4) correct anomalies existing in the software.
Software engineering project management - A state-of-the-art report
NASA Technical Reports Server (NTRS)
Thayer, R. H.; Lehman, J. H.
1977-01-01
The management of software engineering projects in the aerospace industry was investigated. The survey assessed such features as contract type, specification preparation techniques, software documentation required by customers, planning and cost-estimating, quality control, the use of advanced program practices, software tools and test procedures, the education levels of project managers, programmers and analysts, work assignment, automatic software monitoring capabilities, design and coding reviews, production times, success rates, and organizational structure of the projects.
Software control and system configuration management - A process that works
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1983-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to ensure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low from high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high-risk system components. We present experimental results obtained by classifying Ada components into two classes: likely or not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error-making process.
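The two-class prediction described here can be pictured with a deliberately simple stand-in: learn a profile of fault-prone and fault-free components from measured metrics, then label new components by proximity. This is a minimal nearest-centroid sketch with invented metric values; it is not the Optimized Set Reduction algorithm itself.

    # Minimal sketch of two-class risk classification from software metrics.
    # A generic nearest-centroid rule stands in for the multivariate model.

    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]

    def classify(x, c_low, c_high):
        d = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        return "high-risk" if d(x, c_high) < d(x, c_low) else "low-risk"

    # training metrics per component: (cyclomatic complexity, fan-out, kSLOC)
    faulty     = [(24, 11, 2.1), (31, 9, 3.4), (18, 14, 1.9)]
    fault_free = [(6, 3, 0.4), (9, 2, 0.7), (4, 5, 0.3)]

    c_high, c_low = centroid(faulty), centroid(fault_free)
    print(classify((20, 10, 1.5), c_low, c_high))  # -> high-risk

Components labeled high-risk would then receive the concentrated testing and verification effort the abstract argues for.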
Software control and system configuration management: A systems-wide approach
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1984-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to ensure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal used to communicate with the ACTS for various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan are command transmission, telemetry reception, error detection, and error recovery procedures.
Software reliability perspectives
NASA Technical Reports Server (NTRS)
Wilson, Larry; Shen, Wenhui
1987-01-01
Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of these models' failures can be attributed to the random nature of the debugging data given to them as input, and it poses the problem of correcting this defect as an area of future research.
Code of Federal Regulations, 2010 CFR
2010-01-01
... performing the airflow test, measure ceiling fan power using an RMS sensor capable of measuring power with an accuracy of ±1%. Prior to using the sensor and sensor software it has selected, the test laboratory shall verify performance of the sensor and sensor software. Measure power input at a point that includes all...
Burn Resuscitation Decision Support System (BRDSS)
2013-09-01
effective for burn care in the deployed and en route care settings. In this period, we completed Human Factors studies, hardware testing, software design... designated U.S. Army Institute of Surgical Research (USAISR) clinical team. Phase 1 System Requirements and Software Development: Arcos will draft a... airworthiness testing. The hardware finalists will be sent to the U.S. Army Aeromedical Research Laboratory (USAARL) for critical airworthiness testing. Phase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horowitz, Scott; Maguire, Jeff; Tabares-Velasco, Paulo Cesar
2016-08-01
This multiphase study involved comprehensive comparative testing of EnergyPlus and SEEM to determine the differences in energy consumption predictions between these two programs and to reconcile prioritized discrepancies through bug fixes, modeling improvements, and/or consistent inputs and assumptions.
Testing Evolutionary Hypotheses in the Classroom with MacClade Software.
ERIC Educational Resources Information Center
Codella, Sylvio G.
2002-01-01
Introduces MacClade which is a Macintosh-based software package that uses the techniques of cladistic analysis to explore evolutionary patterns. Describes a novel and effective exercise that allows undergraduate biology majors to test a hypothesis about behavioral evolution in insects. (Contains 13 references.) (Author/YDS)
Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)
EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...
Using a Self-Administered Visual Basic Software Tool To Teach Psychological Concepts.
ERIC Educational Resources Information Center
Strang, Harold R.; Sullivan, Amie K.; Schoeny, Zahrl G.
2002-01-01
Introduces LearningLinks, a Visual Basic software tool that allows teachers to create individualized learning modules that use constructivist and behavioral learning principles. Describes field testing of undergraduates at the University of Virginia that tested a module designed to improve understanding of the psychological concepts of…
MFV-class: a multi-faceted visualization tool of object classes.
Zhang, Zhi-meng; Pan, Yun-he; Zhuang, Yue-ting
2004-11-01
Classes are key software components in an object-oriented software system. In many industrial OO software systems, there are classes that have complicated structures and relationships. So in the processes of software maintenance, testing, reengineering, reuse, and restructuring, it is a challenge for software engineers to understand these classes thoroughly. This paper proposes a class comprehension model based on constructivist learning theory, and implements a software visualization tool (MFV-Class) to help in the comprehension of a class. The tool provides multiple views of a class to uncover the manifold facets of its contents. It enables visualizing three object-oriented metrics of classes to help users focus on the understanding process. A case study was conducted to evaluate our approach and the toolkit.
Wolf Testing: Open Source Testing Software
NASA Astrophysics Data System (ADS)
Braasch, P.; Gay, P. L.
2004-12-01
Wolf Testing is software for easily creating and editing exams. Wolf Testing allows the user to create an exam from a database of questions, view it on screen, and easily print it along with the corresponding answer guide. The questions can be multiple choice, short answer, long answer, or true and false varieties. This software can be accessed securely from any location, allowing the user to easily create exams from home. New questions, which can include associated pictures, can be added through a web-interface. After adding in questions, they can be edited, deleted, or duplicated into multiple versions. Long-term test creation is simplified, as you are able to quickly see what questions you have asked in the past and insert them, with or without editing, into future tests. All tests are archived in the database. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. The secure interface keeps students out, and allows you to decide who can create tests and who can edit information already in the database. Tests can be output as either html with pictures or rich text without pictures, and there are plans to add PDF and MS Word formats as well. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing incentive to start this project, computers and resources to complete this project, and inspiration for the project's name. We would also like to thank Dr. Ronald Newburgh for his assistance in beta testing.
Software for an experimental air-ground data link : volume 1. functional description and flowcharts.
DOT National Transportation Integrated Search
1975-10-01
This report documents the complete software system developed for the Experimental Data Link System which was implemented for flight test during the Air-Ground Data Link Development Program. The software development is presented in three volumes as fol...
Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan
NASA Technical Reports Server (NTRS)
Schwantje, Robert
1994-01-01
This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments, and the ground support software used in the test and integration of the AMSU-A instruments.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
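As one concrete example of the quantitative information a complexity analysis yields, McCabe's cyclomatic number can be read straight off a routine's control-flow graph as V(G) = E - N + 2P. A minimal sketch; the graph below is invented for illustration and the handbook's own tools are not reproduced here.

    # Cyclomatic complexity: V(G) = E - N + 2P for a control-flow graph
    # with E edges, N nodes, and P connected components (1 per routine).

    def cyclomatic_complexity(edges, nodes, components=1):
        return len(edges) - len(nodes) + 2 * components

    # if/else diamond: entry -> (then | else) -> exit
    nodes = {"entry", "then", "else", "exit"}
    edges = [("entry", "then"), ("entry", "else"),
             ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(edges, nodes))  # -> 2

Routines whose V(G) exceeds an agreed threshold are the natural candidates for the risk areas and testing deficiencies the handbook describes.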
NASA Astrophysics Data System (ADS)
Kulas, M.; Borelli, Jose Luis; Gässler, Wolfgang; Peter, Diethard; Rabien, Sebastian; Orban de Xivry, Gilles; Busoni, Lorenzo; Bonaglia, Marco; Mazzoni, Tommaso; Rahmer, Gustavo
2014-07-01
Commissioning time for an instrument at an observatory is precious, especially the night time. Whenever astronomers come up with a software feature request or point out a software defect, the software engineers have the task of finding a solution and implementing it as fast as possible. In this project phase, the software engineers work under time pressure and stress to deliver functional instrument control software (ICS). The shortness of development time during commissioning is a constraint for software engineering teams and applies to the ARGOS project as well. The goal of the ARGOS (Advanced Rayleigh guided Ground layer adaptive Optics System) project is the upgrade of the Large Binocular Telescope (LBT) with an adaptive optics (AO) system consisting of six Rayleigh laser guide stars and wavefront sensors. For developing the ICS, we used Test-Driven Development (TDD), whose main rule demands that the programmer write test code before production code. Thereby, TDD can yield a software system that grows without defects and eases maintenance. Having applied TDD in a calm and relaxed environment like the office and laboratory, the ARGOS team has profited from the benefits of TDD. Before the commissioning, we were worried that the time pressure in that tough project phase would force us to drop TDD, because we would spend more time writing test code than it would be worth. Despite this concern at the beginning, we could keep TDD most of the time also in this project phase. This report describes the practical application and performance of TDD, including its benefits, limitations, and problems, during the ARGOS commissioning. Furthermore, it covers our experience with pair programming and continuous integration at the telescope.
[Quality assurance of a virtual simulation software: application to IMAgo and SIMAgo (ISOgray)].
Isambert, A; Beaudré, A; Ferreira, I; Lefkopoulos, D
2007-06-01
Virtual simulation is often used to prepare three-dimensional conformal radiation therapy treatments. As the quality of the treatment is widely dependent on this step, it is mandatory to perform extensive controls on this software before clinical use. The tests presented in this work have been carried out on the treatment planning system ISOgray (DOSIsoft), including the delineation module IMAgo and the virtual simulation module SIMAgo. According to our experience, the most relevant controls of international protocols have been selected. These tests mainly focused on measuring and delineation tools and virtual simulation functionalities, and have been performed with three phantoms: the Quasar Multi-Purpose Body Phantom, the Quasar MLC Beam Geometry Phantom (Modus Medical Devices Inc.) and a phantom developed at Hospital Tenon. No major issues were identified while performing the tests. These controls have emphasized the necessity for the user to consider with a critical eye the results displayed by virtual simulation software. The contrast of visualization, the slice thickness, and the calculation and display mode of 3D structures used by the software are all sources of uncertainty. A virtual simulation software quality assurance procedure has been written and applied to a set of CT images. Similar tests have to be performed periodically, and at minimum at each change of major version.
A four-alternative forced choice (4AFC) software for observer performance evaluation in radiology
NASA Astrophysics Data System (ADS)
Zhang, Guozhi; Cockmartin, Lesley; Bosmans, Hilde
2016-03-01
The four-alternative forced choice (4AFC) test is a psychophysical method that can be adopted for observer performance evaluation in radiological studies. While the concept of this method is well established, difficulties in handling large image data, performing unbiased sampling, and keeping track of the choices made by the observer have restricted its application in practice. In this work, we propose an easy-to-use software package that can help perform 4AFC tests with DICOM images. The software suits any experimental design that follows the 4AFC approach. It has a powerful image viewing system that closely simulates the clinical reading environment. The graphical interface allows the observer to adjust various viewing parameters and perform the selection with very simple operations. The sampling process involved in 4AFC, as well as the speed and accuracy of the choices made by the observer, is precisely monitored in the background and can be easily exported for test analysis. The software also has a defensive mechanism for data management and operation control that minimizes the possibility of mistakes by the user during the test. This software can largely facilitate the use of the 4AFC approach in radiological observer studies and is expected to have widespread applicability.
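Since each 4AFC trial offers four alternatives, chance performance is 25%, and a session is commonly summarized as proportion correct with an exact binomial test against guessing. A minimal scoring sketch with invented trial counts; the package's actual export format is not specified in the abstract.

    # Score a 4AFC session: proportion correct versus the 25% guessing
    # level, with an exact one-sided binomial p-value.
    from math import comb

    def binom_p_at_least(k, n, p=0.25):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n_trials, n_correct = 80, 34  # invented session counts
    print(f"proportion correct: {n_correct / n_trials:.2f}")
    print(f"P(X >= {n_correct} | guessing): {binom_p_at_least(n_correct, n_trials):.4g}")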
Software framework for the upcoming MMT Observatory primary mirror re-aluminization
NASA Astrophysics Data System (ADS)
Gibson, J. Duane; Clark, Dusty; Porter, Dallan
2014-07-01
Details of the software framework for the upcoming in-situ re-aluminization of the 6.5m MMT Observatory (MMTO) primary mirror are presented. This framework includes: 1) a centralized key-value store and data structure server for data exchange between software modules, 2) a newly developed hardware-software interface for faster data sampling and better hardware control, 3) automated control algorithms based upon empirical testing, modeling, and simulation of the aluminization process, 4) re-engineered graphical user interfaces (GUIs) that use state-of-the-art web technologies, and 5) redundant relational databases for data logging. Redesign of the software framework has several objectives: 1) automated process control to provide more consistent and uniform mirror coatings, 2) optional manual control of the aluminization process, 3) modular design to allow flexibility in process control and software implementation, 4) faster data sampling and logging rates to better characterize the approximately 100-second aluminization event, and 5) synchronized "real-time" web application GUIs to provide all users with exactly the same data. The framework has been implemented as four modules interconnected by a data store/server. The four modules are integrated into two Linux system services that start automatically at boot time and remain running at all times. Performance of the software framework is assessed through extensive testing within 2.0 meter and smaller coating chambers at the Sunnyside Test Facility. The redesigned software framework helps ensure that a better performing and longer lasting coating will be achieved during the re-aluminization of the MMTO primary mirror.
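Item 1 of the framework, a centralized key-value store through which modules exchange data, can be sketched briefly. A real deployment would use a data structure server such as Redis; the dict-backed stand-in and key names below are invented so the sketch stays self-contained.

    # Module-to-module data exchange through a central key-value store.
    # Values are serialized to JSON, as they would be over a network store.
    import json, time

    class KeyValueStore:
        def __init__(self):
            self._data = {}
        def set(self, key, value):
            self._data[key] = json.dumps(value)
        def get(self, key):
            raw = self._data.get(key)
            return None if raw is None else json.loads(raw)

    store = KeyValueStore()
    # sampling module publishes a reading; a GUI module reads it back
    store.set("chamber/pressure", {"torr": 2.1e-5, "t": time.time()})
    print(store.get("chamber/pressure"))

Routing all exchange through one store is what lets the synchronized web GUIs show every user exactly the same data.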
Automated unit-level testing with heuristic rules
NASA Technical Reports Server (NTRS)
Carlisle, W. Homer; Chang, Kai-Hsiung; Cross, James H.; Keleher, William; Shackelford, Keith
1990-01-01
Software testing plays a significant role in the development of complex software systems. Current testing methods generally require significant effort to generate meaningful test cases. The QUEST/Ada system is a prototype system designed using CLIPS to experiment with expert-system-based test case generation. The prototype is designed to test for condition coverage, and attempts to generate test cases to cover all feasible branches contained in an Ada program. This paper reports on heuristics used by the system. These heuristics vary according to the amount of knowledge obtained by preprocessing and execution of the boolean conditions in the program.
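The underlying generation loop can be illustrated without the expert system: sample candidate inputs until the decision under test has been driven both true and false. A minimal sketch with plain random search standing in for the CLIPS heuristics; the condition under test is invented.

    # Heuristic test-case generation for branch coverage: keep sampling
    # inputs until both outcomes of the decision have been observed.
    import random

    def decision(x, y):        # condition under test
        return x > 10 and y < 5

    def generate(seed=1, budget=1000):
        rng, cases, seen = random.Random(seed), [], set()
        for _ in range(budget):
            x, y = rng.randint(-20, 20), rng.randint(-20, 20)
            outcome = decision(x, y)
            if outcome not in seen:
                seen.add(outcome)
                cases.append(((x, y), outcome))
            if seen == {True, False}:
                break
        return cases

    print(generate())  # inputs driving the condition both true and false

The expert-system heuristics earn their keep where random search stalls, for example on tight equality conditions that random inputs almost never satisfy.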
Metamorphic Testing for Cybersecurity.
Chen, Tsong Yueh; Kuo, Fei-Ching; Ma, Wenjuan; Susilo, Willy; Towey, Dave; Voas, Jeffrey; Zhou, Zhi Quan
2016-06-01
Testing is a major approach for the detection of software defects, including vulnerabilities in security features. This article introduces metamorphic testing (MT), a relatively new testing method, and discusses how the new perspective of MT can help to conduct negative testing as well as to alleviate the oracle problem in the testing of security-related functionality and behavior. As demonstrated by the effectiveness of MT in detecting previously unknown bugs in real-world critical applications such as compilers and code obfuscators, we conclude that software testing of security-related features should be conducted from diverse perspectives in order to achieve greater cybersecurity.
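A metamorphic relation sidesteps the oracle problem by checking a property that must hold between outputs rather than checking the outputs themselves. A minimal sketch using the identity sin(x) = sin(pi - x); the checking harness below is illustrative and is not taken from the article.

    # Metamorphic testing: even without knowing the "right" value of
    # sin(x), any implementation violating sin(x) == sin(pi - x) is faulty.
    import math, random

    def check_sine_mr(sin_impl, trials=1000, tol=1e-12):
        rng = random.Random(0)
        for _ in range(trials):
            x = rng.uniform(-100.0, 100.0)
            if abs(sin_impl(x) - sin_impl(math.pi - x)) > tol:
                return f"MR violated at x={x}"
        return "no violation found"

    print(check_sine_mr(math.sin))

The same idea scales to negative testing of security behavior: follow-up inputs derived from an original must produce consistently related responses, and any inconsistency flags a defect with no oracle required.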
Metamorphic Testing for Cybersecurity
Chen, Tsong Yueh; Kuo, Fei-Ching; Ma, Wenjuan; Susilo, Willy; Towey, Dave; Voas, Jeffrey
2016-01-01
Testing is a major approach for the detection of software defects, including vulnerabilities in security features. This article introduces metamorphic testing (MT), a relatively new testing method, and discusses how the new perspective of MT can help to conduct negative testing as well as to alleviate the oracle problem in the testing of security-related functionality and behavior. As demonstrated by the effectiveness of MT in detecting previously unknown bugs in real-world critical applications such as compilers and code obfuscators, we conclude that software testing of security-related features should be conducted from diverse perspectives in order to achieve greater cybersecurity. PMID:27559196
West, A G; Goldsmith, G R; Matimati, I; Dawson, T E
2011-08-30
Previous studies have demonstrated the potential for large errors to occur when analyzing waters containing organic contaminants using isotope ratio infrared spectroscopy (IRIS). In an attempt to address this problem, IRIS manufacturers now provide post-processing spectral analysis software capable of identifying samples with the types of spectral interference that compromise their stable isotope analysis. Here we report two independent tests of this post-processing spectral analysis software on two IRIS systems, OA-ICOS (Los Gatos Research Inc.) and WS-CRDS (Picarro Inc.). Following a similar methodology to a previous study, we cryogenically extracted plant leaf water and soil water and measured the δ²H and δ¹⁸O values of identical samples by isotope ratio mass spectrometry (IRMS) and IRIS. As an additional test, we analyzed plant stem waters and tap waters by IRMS and IRIS in an independent laboratory. For all tests we assumed that the IRMS value represented the "true" value against which we could compare the stable isotope results from the IRIS methods. Samples showing significant deviations from the IRMS value (>2σ) were considered to be contaminated and representative of spectral interference in the IRIS measurement. Over the two studies, 83% of plant species were considered contaminated on OA-ICOS and 58% on WS-CRDS. Post-analysis, spectra were analyzed using the manufacturer's spectral analysis software, in order to see if the software correctly identified contaminated samples. In our tests the software performed well, identifying all the samples with major errors. However, some false negatives indicate that user evaluation and testing of the software are necessary. Repeat sampling of plants showed considerable variation in the discrepancies between IRIS and IRMS. As such, we recommend that spectral analysis of IRIS data must be incorporated into standard post-processing routines. Furthermore, we suggest that the results from spectral analysis be included when reporting stable isotope data from IRIS.
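The acceptance rule used in the study can be stated compactly: a sample counts as contaminated when its IRIS value departs from the paired IRMS value by more than 2σ. A minimal sketch with invented delta values and an assumed spread for clean reference waters.

    # Flag IRIS samples whose deviation from the paired IRMS value
    # exceeds k sigma of the IRMS-IRIS differences in known-clean waters.

    def flag_contaminated(pairs, sigma_clean, k=2.0):
        return [name for name, irms, iris in pairs
                if abs(iris - irms) > k * sigma_clean]

    sigma_clean = 0.4  # per-mil spread of differences in clean waters (assumed)
    samples = [("tap",    -42.0, -42.2),
               ("leaf A", -38.5, -35.1),   # large offset: likely contaminated
               ("stem B", -55.0, -54.8)]
    print(flag_contaminated(samples, sigma_clean))  # -> ['leaf A']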
NASA Astrophysics Data System (ADS)
Budiman, Kholiq; Prahasto, Toni; Kusumawardhani, Amie
2018-02-01
This research applied an integrated design and development of a planning information system, designed using Enterprise Architecture Planning (EAP). One reason for undertaking the research is the frequent discrepancy between planning and budget realization, which results in ineffective planning. The design using EAP aims to keep development aligned with the strategic direction of the organization. In practice, EAP is carried out in several stages: planning initiation, identification and definition of business functions, and then architectural design and the EA implementation plan. In addition to the design of the Enterprise Architecture, this research carried out the implementation, which was tested by black-box and white-box methods. The black-box method was used to test the fundamental aspects of the software, in two forms: User Acceptance Testing and software functionality testing. The white-box method was used to test the effectiveness of the code, using unit testing. The tests conducted on the integrated planning information system, using both white-box and black-box methods, were declared successful. Success in software testing alone, however, cannot establish whether the software has changed anything relative to the circumstances prior to its development. To ensure the success of the system implementation, the authors tested the consistency between planning data and realization data from before the information system was used until after it was used, by computing the difference between planned and realized times. From the tabulated data, the planning information system reduces the difference between planning time and realization time, which indicates that it can motivate the planning unit to realize the budget as designed. It also shows that the value chain of the planning information system has implications for budget realization.
Space Telecommunications Radio System (STRS) Architecture Standard. Release 1.02.1
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Handler, Louis M.; Hall, C. Steve; Mortensen, Dale J.; Johnson, Sandra K.; Briones, Janette C.; Nappier, Jennifer M.; Downey, Joseph A.; Lux, James P.
2012-01-01
This document contains the NASA architecture standard for software defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer.
Manager's handbook for software development, revision 1
NASA Technical Reports Server (NTRS)
1990-01-01
Methods and aids for the management of software development projects are presented. The recommendations are based on analyses and experiences of the Software Engineering Laboratory (SEL) with flight dynamics software development. The management aspects of the following subjects are described: organizing the project, producing a development plan, estimating costs, scheduling, staffing, preparing deliverable documents, using management tools, monitoring the project, conducting reviews, auditing, testing, and certifying.
Developing Confidence Limits For Reliability Of Software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1991-01-01
Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
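In the Moranda geometric de-eutrophication model referred to above, the failure rate after the (i-1)-th repair is lambda_i = D * k^(i-1) with 0 < k < 1, and interfailure times are exponential. A minimal sketch that fits D and k by a crude grid-search maximum likelihood on invented interfailure times; the pivotal confidence-bound construction itself is not reproduced here.

    # Moranda geometric model: the i-th interfailure time is exponential
    # with rate lam_i = D * k**(i-1); crude grid-search MLE for (D, k).
    from math import log

    def loglik(D, k, times):
        return sum(log(D * k**i) - D * k**i * t for i, t in enumerate(times))

    def fit(times):
        best = None
        for Di in range(1, 200):            # D in 0.01 .. 1.99
            D = Di / 100
            for ki in range(50, 100):       # k in 0.50 .. 0.99
                k = ki / 100
                ll = loglik(D, k, times)
                if best is None or ll > best[0]:
                    best = (ll, D, k)
        return best

    times = [5, 7, 13, 18, 31, 52, 80]      # hours between failures (invented)
    ll, D, k = fit(times)
    print(f"MLE: D={D:.2f}, k={k:.2f}; next-failure MTTF ~ {1/(D*k**len(times)):.0f} h")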
NASA Technical Reports Server (NTRS)
2004-01-01
Ribbons is a program developed at UAB and used worldwide to graphically depict complicated protein structures in a simplified format. The program uses sophisticated computer systems to understand the implications of protein structures. The Influenza virus remains a major causative agent of a large number of deaths among the elderly and young children, and of huge economic losses due to illness. Finding a cure will have a broad impact both on basic research into the viral pathology of fast-evolving infectious agents and on the clinical treatment of influenza virus infection. The reproduction process of all strains of influenza is dependent on the same enzyme, neuraminidase. Shown here is a segmented representation of the neuraminidase inhibitor compound sitting inside a cave-like contour of the neuraminidase enzyme surface. This cave-like formation, present in every neuraminidase enzyme, is the active site crucial to the flu's ability to infect. The space-grown crystals of neuraminidase have provided significant new details about the three-dimensional characteristics of this active site, thus allowing researchers to design drugs that fit more tightly into the site. Principal Investigator: Dr. Larry DeLucas
MultitaskProtDB: a database of multitasking proteins.
Hernández, Sergio; Ferragut, Gabriela; Amela, Isaac; Perez-Pons, JosepAntoni; Piñol, Jaume; Mozo-Villarias, Angel; Cedano, Juan; Querol, Enrique
2014-01-01
We have compiled MultitaskProtDB, available online at http://wallace.uab.es/multitask, to provide a repository where the many multitasking proteins found in the literature can be stored. Multitasking or moonlighting is the capability of some proteins to execute two or more biological functions. Usually, multitasking proteins are revealed experimentally by serendipity. This ability of proteins to perform multitasking functions helps us to understand one of the ways used by cells to perform many complex functions with a limited number of genes. Even so, the study of this phenomenon is complex because, among other things, there has been no database of moonlighting proteins. The existence of such a tool facilitates the collection and dissemination of these important data. This work reports the database, MultitaskProtDB, which is designed as a user-friendly web page containing >288 multitasking proteins with their NCBI and UniProt accession numbers, canonical and additional biological functions, monomeric/oligomeric states, PDB codes when available, and bibliographic references. This database also serves to gain insight into some characteristics of multitasking proteins, such as the frequencies of the different pairs of functions, phylogenetic conservation, and so forth.
Software OT&E Guidelines. Volume 3. Software Maintainability Evaluator’s Handbook
1980-04-01
Software OT&E Guidelines, Volume III: Software Maintainability Evaluator's Handbook. April 1980. Air Force Test and Evaluation Center, Kirtland Air Force Base, New Mexico 87117.
Proposing an Evidence-Based Strategy for Software Requirements Engineering.
Lindoerfer, Doris; Mansmann, Ulrich
2016-01-01
This paper discusses an evidence-based approach to software requirements engineering. The approach is called evidence-based since it uses publications on the specific problem as a surrogate for stakeholder interests, in order to formulate risks and testing experiences. This complements the idea behind agile software development models, in which requirements and solutions evolve through collaboration between self-organizing cross-functional teams. The strategy is exemplified and applied to the development of a software requirements list used to develop software systems for patient registries.
State-of-the-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation
2014-07-01
preclude in-depth analysis, and widespread use of a Software-as-a-Service (SaaS) model that limits data availability and application to DoD systems... provide mobile application analysis using a Software-as-a-Service (SaaS) model. In this case, any software to be analyzed must be sent to the... tools are only available through a SaaS model. The widespread use of a Software-as-a-Service (SaaS) model as a sole evaluation model limits data
FRAMES-2.0 Software System: Frames 2.0 Pest Integration (F2PEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castleton, Karl J.; Meyer, Philip D.
2009-06-17
The implementation of the FRAMES 2.0 F2PEST module is described, including requirements, design, and specifications of the software. This module integrates the PEST parameter estimation software within the FRAMES 2.0 environmental modeling framework. A test case is presented.
A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Moore, Ashley
2005-01-01
The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images are analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
cit: hypothesis testing software for mediation analysis in genomic applications.
Millstein, Joshua; Chen, Gary K; Breton, Carrie V
2016-08-01
The challenges of successfully applying causal inference methods include: (i) satisfying underlying assumptions, (ii) limitations in the data/models accommodated by the software and (iii) the low power of common multiple testing approaches. The causal inference test (CIT) is based on hypothesis testing rather than estimation, allowing the testable assumptions to be evaluated in the determination of statistical significance. A user-friendly software package provides P-values and optionally permutation-based FDR estimates (q-values) for potential mediators. It can handle single and multiple binary and continuous instrumental variables, binary or continuous outcome variables and adjustment covariates. Also, the permutation-based FDR option provides a non-parametric implementation. Simulation studies demonstrate the validity of the cit package and show a substantial advantage of permutation-based FDR over other common multiple testing strategies. The cit open-source R package is freely available from the CRAN website (https://cran.r-project.org/web/packages/cit/index.html) with embedded C++ code that utilizes the GNU Scientific Library, also freely available (http://www.gnu.org/software/gsl/). joshua.millstein@usc.edu Supplementary data are available at Bioinformatics online.
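The permutation-based FDR option can be pictured generically: compare each observed statistic against a pooled null built from permuted data, and convert exceedances into q-values. A minimal stand-in sketch with simulated data; it is not the cit package's own code.

    # Permutation-based FDR: estimated false positives at threshold s are
    # m * P_null(X >= s); q = FP / number of discoveries at that threshold.
    import random

    def perm_fdr(observed, null_stats):
        m, n = len(observed), len(null_stats)
        qvals = []
        for s in observed:
            exceed_null = sum(1 for v in null_stats if v >= s) / n
            exceed_obs = sum(1 for v in observed if v >= s)
            qvals.append(min(1.0, exceed_null * m / exceed_obs))
        return qvals

    rng = random.Random(0)
    obs = [3.2, 2.7, 0.4, 1.1]                      # observed statistics
    null = [rng.gauss(0, 1) for _ in range(4000)]   # pooled permutation stats
    print([round(q, 3) for q in perm_fdr(obs, null)])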
Production Maintenance Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason Gabler, David Skinner
2005-11-01
PMI is an XML framework for formulating tests of software and software environments that operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible, the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a perl program which is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionalities (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.). At each step in this hierarchy a failure leads to generation of a text message or RSS that can be tagged as to who should be notified of the failure. There are two functionalities that PMI has been directed at: 1) regular and automated testing of multi-user environments and 2) version-wise testing of new software releases prior to their deployment in a production mode.
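The hierarchy described here, cheap environment checks first and expensive activities only if they pass, with each failure tagged for notification, can be sketched briefly. All test bodies and notification tags below are invented; PMI itself drives shell scripts from XML via perl.

    # Hierarchical, fail-fast test suite with per-test notification tags.
    import os, tempfile

    def fs_writable():
        with tempfile.NamedTemporaryFile() as f:
            f.write(b"ok")
        return True

    def compiler_found():
        return any(os.access(os.path.join(p, "cc"), os.X_OK)
                   for p in os.environ.get("PATH", "").split(os.pathsep))

    suite = [  # ordered simple -> complex; stop at first failure
        ("filesystem writable", fs_writable, "sysadmin"),
        ("compiler found",      compiler_found, "consultants"),
    ]

    for name, test, notify in suite:
        try:
            ok = test()
        except Exception:
            ok = False
        print(f"{name}: {'PASS' if ok else 'FAIL -> notify ' + notify}")
        if not ok:
            break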
SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salomons, G; Kelly, D
Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinics. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic, since it assumes that the users have a perfect knowledge of how and when to apply the software, and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here, the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning-oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to the use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software, including both data and users. Conclusion: We propose a new approach to developing guidelines, based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.
Reliability Testing Using the Vehicle Durability Simulator
2017-11-20
remote parameter control (RPC) software. The software is specifically designed for the data collection, analysis, and simulation processes outlined in... 4516. 3. TOP 02-2-505 Inspection and Preliminary Operation of Vehicles, 4 February 1987. 4. Multi-Shaker Test and Control: Design, Test, and...
ERIC Educational Resources Information Center
Baki, Adnan; Kosa, Temel; Guven, Bulent
2011-01-01
The study compared the effects of dynamic geometry software and physical manipulatives on the spatial visualisation skills of first-year pre-service mathematics teachers. A pre- and post-test quasi-experimental design was used. The Purdue Spatial Visualisation Test (PSVT) was used for the pre- and post-test. There were three treatment groups. The…
Integrated Optical Design Analysis (IODA): New Test Data and Modeling Features
NASA Technical Reports Server (NTRS)
Moore, Jim; Troy, Ed; Patrick, Brian
2003-01-01
A general overview of the capabilities of the IODA ("Integrated Optical Design Analysis") software is presented. IODA promotes efficient exchange of data and modeling results between the thermal, structures, optical design, and testing engineering disciplines. This presentation focuses on new features added to the software that allow measured test data to be imported into the IODA environment for post-processing or comparisons with pretest model predictions.
Process Definition and Modeling Guidebook. Version 01.00.02
1992-12-01
material (and throughout the guidebook) process definition is considered to be the act of representing the important characteristics of a process in a... characterized by software standards and guidelines, software inspections and reviews, and more formalized testing (including test plans, test support tools... paper-based approach works well for training, examples, and possibly even small pilot projects and case studies. However, large projects will benefit from
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting for not only formal verification and program testing, but also the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in the statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10⁻⁴ or better). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
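The arithmetic behind the claim is worth making explicit: demonstrating a per-demand failure probability below p with confidence C by failure-free testing alone requires n >= ln(1 - C) / ln(1 - p) runs, which is what makes credit for other V&V evidence so valuable at ultra-high levels. A minimal sketch of that calculation.

    # Failure-free runs needed so that (1 - p)**n <= 1 - C, i.e. confidence
    # C that the per-demand failure probability is below p.
    from math import log, ceil

    def tests_needed(p, confidence):
        return ceil(log(1 - confidence) / log(1 - p))

    for p in (1e-3, 1e-4, 1e-5):
        print(f"p={p:g}: {tests_needed(p, 0.99):,} failure-free tests for 99% confidence")

Each factor of ten in the reliability target costs roughly a factor of ten in testing, so any qualitative evidence that legitimately lowers the testing burden matters.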
Software fault tolerance using data diversity
NASA Technical Reports Server (NTRS)
Knight, John C.
1991-01-01
Research on data diversity is discussed. Data diversity relies on a different form of redundancy from existing approaches to software fault tolerance and is substantially less expensive to implement. Data diversity can also be applied to software testing and greatly facilitates the automation of testing. Up to now it has been explored both theoretically and in a pilot study, and has been shown to be a promising technique. The effectiveness of data diversity as an error detection mechanism and the application of data diversity to differential equation solvers are discussed.
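The error-detection side of data diversity can be shown in miniature: re-express an input through an identity the specification guarantees, re-run the program, and flag any disagreement. The identity and the seeded fault below are illustrative only.

    # Data diversity as an error detector: sin(x) must equal sin(x + 2*pi),
    # so disagreement between the two runs exposes the seeded bug.
    import math

    def faulty_sin(x):                    # seeded bug: wrong near pi/2
        return 1.1 if abs(x - math.pi / 2) < 1e-3 else math.sin(x)

    def diverse_check(f, x, tol=1e-6):
        reexpressed = f(x + 2 * math.pi)  # logically equivalent input
        return abs(f(x) - reexpressed) <= tol

    print(diverse_check(faulty_sin, math.pi / 2))  # -> False: error detected
    print(diverse_check(faulty_sin, 0.3))          # -> True: runs agree

In a fault-tolerance setting the re-expressed run also supplies the alternate result to fall back on, which is the inexpensive redundancy the abstract refers to.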
SEDS1 mission software verification using a signal simulator
NASA Technical Reports Server (NTRS)
Pierson, William E.
1992-01-01
The first flight of the Small Expendable Deployer System (SEDS1) is scheduled to fly as the secondary payload of a Delta II in March 1993. The objective of the SEDS1 mission is to collect data to validate the concept of tethered satellite systems and to verify computer simulations used to predict their behavior. SEDS1 will deploy a 50 lb. instrumented satellite as an end mass using a 20 km tether. Langley Research Center is providing the end mass instrumentation, while the Marshall Space Flight Center is designing and building the deployer. The objective of the experiment is to test the SEDS design concept by demonstrating that the system will satisfactorily deploy the full 20 km tether without stopping prematurely, come to a smooth stop on the application of a brake, and cut the tether at the proper time after it swings to the local vertical. Also, SEDS1 will collect data which will be used to test the accuracy of the tether dynamics models used to simulate this type of deployment. The experiment will last about 1.5 hours and complete approximately 1.5 orbits. Radar tracking of the Delta II and end mass is planned. In addition, the SEDS1 on-board computer will continuously record, store, and transmit mission data over the Delta II S-band telemetry system. The Data System will count tether windings as the tether unwinds, log the times of each turn and other mission events, monitor tether tension, and record the temperature of system components. A summary of the measurements taken during the SEDS1 mission is shown. The Data System will also control the tether brake and cutter mechanisms. Preliminary versions of two major sections of the flight software, the data telemetry modules and the data collection modules, were developed and tested under the 1990 NASA/ASEE Summer Faculty Fellowship Program. To facilitate the debugging of these software modules, a prototype SEDS Data System was programmed to simulate turn count signals. During the 1991 summer program, the concept of simulating signals produced by the SEDS electronics systems and circuits was expanded and more precisely defined. During the 1992 summer program, the SEDS signal simulator was programmed to test the requirements of the SEDS Mission Software, and this simulator will be used in the formal verification of the SEDS Mission Software. A formal test procedures specification was written which incorporates the use of the signal simulator to test the SEDS Mission Software and which incorporates procedures for testing the other major component of the SEDS software, the Monitor Software.
Robot-operated quality control station based on the UTT method
NASA Astrophysics Data System (ADS)
Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz; Muszyńska, Magdalena; Nawrocki, Jacek
2017-03-01
This paper presents a robotic test stand for the ultrasonic transmission tomography (UTT) inspection of stator vane thickness. The article presents the design of the test stand in the Autodesk Robot Structural Analysis Professional 2013 software suite. The performance of the designed test stand solution was simulated in the RobotStudio software suite. The operating principle of the test stand measurement system is presented, with a specific focus on the measurement strategy. The results of actual wall thickness measurements performed on stator vanes are presented.
Round Robin Fatigue Crack Growth Testing Results
2006-11-01
Testing was accomplished, in accordance with ASTM E647, using two different capacity SATEC frames: a 20 kip test frame for the 7075-T6 panels and a 55 kip... Equipment and Setup: a. SATEC; b. 20 kip (7075-T6), 55 kip (2024-T351); c. Test control hardware/software: i. Hardware: TestStar IIm; ii. Software: Station...
Mueller, Shane T.; Esposito, Alena G.
2015-01-01
We describe the Bivalent Shape Task (BST), software using the Psychology Experiment Building Language (PEBL), for testing of cognitive interference and the ability to suppress interference. The test is available via the GNU Public License, Version 3 (GPLv3), is freely modifiable, and has been tested on both children and adults and found to provide a simple and fast non-verbal measure of cognitive interference and suppression that requires no reading. PMID:26702358
Integrated optomechanical analysis and testing software development at MIT Lincoln Laboratory
NASA Astrophysics Data System (ADS)
Stoeckel, Gerhard P.; Doyle, Keith B.
2013-09-01
Advanced analytical software capabilities are being developed to advance the design of prototypical hardware in the Engineering Division at MIT Lincoln Laboratory. The current effort is focused on the integration of analysis tools tailored to the work flow, organizational structure, and current technology demands. These tools are being designed to provide superior insight into the interdisciplinary behavior of optical systems and enable rapid assessment and execution of design trades to optimize the design of optomechanical systems. The custom software architecture is designed to exploit and enhance the functionality of existing industry standard commercial software, provide a framework for centralizing internally developed tools, and deliver greater efficiency, productivity, and accuracy through standardization, automation, and integration. Specific efforts have included the development of a feature-rich software package for Structural-Thermal-Optical Performance (STOP) modeling, advanced Line Of Sight (LOS) jitter simulations, and improved integration of dynamic testing and structural modeling.
Evaluation of Visualization Software
NASA Technical Reports Server (NTRS)
Globus, Al; Uselton, Sam
1995-01-01
Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.
REVEAL: Software Documentation and Platform Migration
NASA Technical Reports Server (NTRS)
Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.
2008-01-01
The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.
[Software for illustrating a cost-quality balance carried out by clinical laboratory practice].
Nishibori, Masahiro; Asayama, Hitoshi; Kimura, Satoshi; Takagi, Yasushi; Hagihara, Michio; Fujiwara, Mutsunori; Yoneyama, Akiko; Watanabe, Takashi
2010-09-01
We have no proper reference indicating the quality of clinical laboratory practice, one which clearly illustrates that better medical tests require greater expense. The Japanese Society of Laboratory Medicine, concerned about the recent difficult medical economy, issued a committee report proposing a guideline to evaluate good laboratory practice. According to the guideline, we developed software that illustrates the cost-quality balance achieved by clinical laboratory practice. We encountered a number of controversial problems, for example, how to measure and weight each quality-related factor, how to calculate the costs of a laboratory test, and how to consider the characteristics of a clinical laboratory. Consequently, we finished only prototype software within the given period and budget. In this paper, the software implementation of the guideline and the above-mentioned problems are summarized. Aiming to stimulate these discussions, the operative software will be put on the Society's homepage for trial.
SimulCAT: Windows Software for Simulating Computerized Adaptive Test Administration
ERIC Educational Resources Information Center
Han, Kyung T.
2012-01-01
Most, if not all, computerized adaptive testing (CAT) programs use simulation techniques to develop and evaluate CAT program administration and operations, but such simulation tools are rarely available to the public. Up to now, several software tools have been available to conduct CAT simulations for research purposes; however, these existing…
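For readers new to such simulations, the sketch below shows a generic CAT simulation loop in Python; it is not SimulCAT's implementation, and the random item bank, the 2PL response model, maximum-information item selection, and EAP scoring are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2PL item bank: discriminations a, difficulties b.
a = rng.uniform(0.8, 2.0, 200)
b = rng.normal(0.0, 1.0, 200)

def p_correct(theta, i):
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def information(theta, i):
    p = p_correct(theta, i)
    return a[i] ** 2 * p * (1.0 - p)

def simulate_cat(true_theta, test_length=20):
    """One simulated examinee: max-information selection, EAP scoring."""
    grid = np.linspace(-4, 4, 81)                 # quadrature for EAP
    posterior = np.exp(-0.5 * grid ** 2)          # standard-normal prior
    used, theta_hat = [], 0.0
    for _ in range(test_length):
        avail = [i for i in range(len(a)) if i not in used]
        item = max(avail, key=lambda i: information(theta_hat, i))
        used.append(item)
        u = rng.random() < p_correct(true_theta, item)   # simulated response
        like = p_correct(grid, item) if u else 1.0 - p_correct(grid, item)
        posterior *= like
        theta_hat = np.sum(grid * posterior) / np.sum(posterior)  # EAP
    return theta_hat

print(simulate_cat(true_theta=1.0))
```

A full simulation study would repeat this over many simulated examinees and compare the recovered abilities against the generating values, which is the kind of experiment tools like SimulCAT automate.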
A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package
ERIC Educational Resources Information Center
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.
2013-01-01
DIMPACK Version 1.0, for assessing test dimensionality based on a nonparametric conditional covariance approach, is reviewed. This software was originally distributed by Assessment Systems Corporation and can now be freely accessed online. The software consists of Windows-based interfaces for three components: DIMTEST, DETECT, and CCPROX/HAC, which…
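The conditional-covariance idea underlying these components can be stated compactly: for each item pair, compute the covariance of responses within groups of examinees matched on the rest score (number correct on all other items), then average over groups. The Python sketch below is a generic illustration of that computation, not DIMPACK's code; under a unidimensional model the resulting values should sit near zero:

```python
import numpy as np

def conditional_covariances(X):
    """Pairwise item covariances conditioned on the rest score
    (number-correct on all *other* items), averaged over score groups.
    X is an (examinees x items) 0/1 response matrix."""
    n, k = X.shape
    cc = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            rest = X.sum(axis=1) - X[:, i] - X[:, j]
            vals, weights = [], []
            for s in np.unique(rest):
                g = rest == s
                if g.sum() > 1:                  # need 2+ examinees per group
                    vals.append(np.cov(X[g, i], X[g, j])[0, 1])
                    weights.append(g.sum())
            cc[i, j] = cc[j, i] = np.average(vals, weights=weights)
    return cc

# Toy unidimensional data: conditional covariances should hover near zero.
rng = np.random.default_rng(1)
theta = rng.normal(size=2000)
diff = np.linspace(-1.5, 1.5, 8)
probs = 1.0 / (1.0 + np.exp(-(theta[:, None] - diff)))
X = (rng.random((2000, 8)) < probs).astype(int)
print(np.round(conditional_covariances(X), 3))
```

DETECT itself goes further, searching for the item partition that maximizes a signed, weighted sum of these conditional covariances, but the quantity being summed is the one computed above.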
Man-computer Interactive Data Access System (McIDAS). [design, development, fabrication, and testing]
NASA Technical Reports Server (NTRS)
1973-01-01
A technical description is given of the effort to design, develop, fabricate, and test the two-dimensional data processing system McIDAS. The system has three basic sections: an access and data archive section, a control section, and a display section. Areas reported include hardware, system software, and applications software.
NASA Tech Briefs, October 2003
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Cryogenic Temperature-Gradient Foam/Substrate Tensile Tester; Flight Test of an Intelligent Flight-Control System; Slat Heater Boxes for Thermal Vacuum Testing; System for Testing Thermal Insulation of Pipes; Electrical-Impedance-Based Ice-Thickness Gauges; Simulation System for Training in Laparoscopic Surgery; Flasher Powered by Photovoltaic Cells and Ultracapacitors; Improved Autoassociative Neural Networks; Toroidal-Core Microinductors Biased by Permanent Magnets; Using Correlated Photons to Suppress Background Noise; Atmospheric-Fade-Tolerant Tracking and Pointing in Wireless Optical Communication; Curved Focal-Plane Arrays Using Back-Illuminated High-Purity Photodetectors; Software for Displaying Data from Planetary Rovers; Software for Refining or Coarsening Computational Grids; Software for Diagnosis of Multiple Coordinated Spacecraft; Software Helps Retrieve Information Relevant to the User; Software for Simulating a Complex Robot; Software for Planning Scientific Activities on Mars; Software for Training in Pre-College Mathematics; Switching and Rectification in Carbon-Nanotube Junctions; Scandia-and-Yttria-Stabilized Zirconia for Thermal Barriers; Environmentally Safer, Less Toxic Fire-Extinguishing Agents; Multiaxial Temperature- and Time-Dependent Failure Model; Cloverleaf Vibratory Microgyroscope with Integrated Post; Single-Vector Calibration of Wind-Tunnel Force Balances; Microgyroscope with Vibrating Post as Rotation Transducer; Continuous Tuning and Calibration of Vibratory Gyroscopes; Compact, Pneumatically Actuated Filter Shuttle; Improved Bearingless Switched-Reluctance Motor; Fluorescent Quantum Dots for Biological Labeling; Growing Three-Dimensional Corneal Tissue in a Bioreactor; Scanning Tunneling Optical Resonance Microscopy; The Micro-Arcsecond Metrology Testbed; Detecting Moving Targets by Use of Soliton Resonances; and Finite-Element Methods for Real-Time Simulation of Surgery.