Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to varying input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for the excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little performance is lost by imposing significant structure on the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction that attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
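The multistage cascade described in this abstract can be sketched in a few lines: each stage quantizes the residual left by the previous stage, so the decoder reconstructs by summing one codevector per stage. This is a generic illustration; the codebook sizes, dimensions, and the full-search criterion here are arbitrary choices, not the thesis's actual configuration.

```python
import numpy as np

def multistage_vq_encode(x, codebooks):
    """Quantize x with a cascade of codebooks: each stage encodes
    the residual left by the previous stage."""
    indices, residual = [], x.astype(float)
    for cb in codebooks:
        # full search: pick the codevector closest to the current residual
        d = np.sum((cb - residual) ** 2, axis=1)
        i = int(np.argmin(d))
        indices.append(i)
        residual = residual - cb[i]
    return indices, residual

def multistage_vq_decode(indices, codebooks):
    # reconstruction is the sum of the selected codevectors
    return sum(cb[i] for i, cb in zip(indices, codebooks))

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((16, 4)) for _ in range(3)]
x = rng.standard_normal(4)
idx, res = multistage_vq_encode(x, codebooks)
x_hat = multistage_vq_decode(idx, codebooks)
# reconstruction error equals the final-stage residual
assert np.allclose(x - x_hat, res)
```

Each stage here costs 16 comparisons instead of the 16^3 = 4096 a single flat codebook of equal rate would require, which is the complexity motivation behind the multistage structure.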
Structured codebook design in CELP
NASA Technical Reports Server (NTRS)
Leblanc, W. P.; Mahmoud, S. A.
1990-01-01
Code-Excited Linear Prediction (CELP) is a popular analysis-by-synthesis technique for quantizing speech at bit rates from 4 to 6 kbps. Codebook design techniques to date have been largely based on either random (often Gaussian) codebooks, or on known binary or ternary codes which efficiently map the space of (assumed white) excitation codevectors. It has been shown that by introducing symmetries into the codebook, good complexity reduction can be realized with only marginal decrease in performance. Codebook design algorithms are considered for a wide range of structured codebooks.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-01-01
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
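The abstract attributes part of the speed-up to "efficient nearest neighbor search techniques." One classic example of such a technique (shown here as an illustration, not necessarily the specific method the authors use) is the partial distance search, which abandons a codevector as soon as its partial squared distance already exceeds the best distance found so far:

```python
import numpy as np

def pds_nearest(x, codebook):
    """Partial distance search: accumulate the squared distance one
    component at a time and abandon a codevector as soon as the
    partial sum exceeds the best full distance found so far."""
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:
                break          # early abandonment: cannot beat the best
        else:
            best_i, best_d = i, d
    return best_i

rng = np.random.default_rng(1)
cb = rng.standard_normal((32, 8))
x = rng.standard_normal(8)
# must agree with an exhaustive full-distance search
full = int(np.argmin(np.sum((cb - x) ** 2, axis=1)))
assert pds_nearest(x, cb) == full
```

The result is identical to the exhaustive search; only the number of multiply-accumulate operations is reduced, which is why such techniques can accelerate codebook design without degrading the reconstructed images.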
Vector excitation speech or audio coder for transmission or storage
NASA Technical Reports Server (NTRS)
Davidson, Grant (Inventor); Gersho, Allen (Inventor)
1989-01-01
A vector excitation coder compresses vectors by using an optimum codebook designed off line, using an initial arbitrary codebook and a set of speech training vectors exploiting codevector sparsity (i.e., by making zero all but a selected number of samples of lowest amplitude in each of N codebook vectors). A fast-search method selects a number N.sub.c of good excitation vectors from the codebook, where N.sub.c is much smaller than N. ORIGIN OF INVENTION: The invention described herein was made in the performance of work under a NASA contract, and is subject to the provisions of Public Law 96-517 (35 USC 202) under which the inventors were granted a request to retain title.
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs the codebook while the data are being encoded or quantized. This is done by computing each centroid as a recursive moving average, so the centroids move after every vector is encoded. For a fixed set of vectors, this recursive calculation yields the same result as the conventional batch centroid computation. This method of centroid calculation can be easily combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
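The core loop of such an adaptive quantizer can be sketched as follows. This is a minimal illustration of the recursive moving-average idea only: the step size, codebook size, and the omission of the side-information channel are all simplifications, not details of the paper.

```python
import numpy as np

def adaptive_vq_encode(vectors, codebook, alpha=0.05):
    """Encode a vector stream while adapting the codebook: after each
    vector is encoded, the winning centroid is updated as a recursive
    moving average so the quantizer tracks the source statistics."""
    codebook = codebook.copy()
    indices = []
    for v in vectors:
        # standard VQ step: nearest codevector wins
        i = int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))
        indices.append(i)
        # recursive moving-average update of the selected centroid
        codebook[i] += alpha * (v - codebook[i])
    return indices, codebook

rng = np.random.default_rng(2)
cb0 = rng.standard_normal((8, 4))
stream = rng.standard_normal((100, 4)) + 3.0   # source offset from cb0
idx, cb = adaptive_vq_encode(stream, cb0)
```

After the stream is processed, `cb` differs from `cb0` because the winning centroids have drifted toward the data; in the paper's scheme the decoder mirrors these updates from multiplexed side information.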
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then finds a codebook starting from the initial codevectors supplied by the PCA grouping. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
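The LBG refinement step that all three variants share can be sketched as a plain generalized Lloyd loop. The PCA-based grouping that supplies the initial codevectors is omitted here; the initialization below is a simple random draw, used only so the sketch is self-contained.

```python
import numpy as np

def lbg_refine(training, codebook, iters=10):
    """Plain LBG / generalized Lloyd refinement: alternately assign each
    training vector to its nearest codevector, then replace every
    codevector by the centroid of the vectors assigned to it."""
    codebook = codebook.copy()
    for _ in range(iters):
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(len(codebook)):
            cell = training[assign == k]
            if len(cell):                  # keep empty cells unchanged
                codebook[k] = cell.mean(axis=0)
    return codebook

def distortion(t, c):
    """Mean squared distance from each vector to its nearest codevector."""
    return ((t[:, None, :] - c[None, :, :]) ** 2).sum(-1).min(axis=1).mean()

rng = np.random.default_rng(3)
train = rng.standard_normal((500, 2))
cb0 = train[rng.choice(500, 8, replace=False)]   # stand-in for PCA init
cb = lbg_refine(train, cb0)
# Lloyd iterations never increase the training distortion
assert distortion(train, cb) <= distortion(train, cb0) + 1e-12
```

Because each Lloyd iteration only ever lowers (or preserves) the training distortion, the quality of the final codebook hinges on the initialization, which is exactly where the PCA-based grouping of the paper enters.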
Orthogonal Array Testing for Transmit Precoding based Codebooks in Space Shift Keying Systems
NASA Astrophysics Data System (ADS)
Al-Ansi, Mohammed; Alwee Aljunid, Syed; Sourour, Essam; Mat Safar, Anuar; Rashidi, C. B. M.
2018-03-01
In Space Shift Keying (SSK) systems, transmit precoding based codebook approaches have been proposed to improve the performance in limited feedback channels. The receiver performs an exhaustive search in a predefined Full-Combination (FC) codebook to select the optimal codeword that maximizes the Minimum Euclidean Distance (MED) between the received constellations. This research aims to reduce the codebook size with the purpose of minimizing the selection time and the number of feedback bits. Therefore, we propose to construct the codebooks based on Orthogonal Array Testing (OAT) methods due to their powerful inherent properties. These methods yield a short codebook whose codewords cover almost all of the possible effects included in the FC codebook. Numerical results show the effectiveness of the proposed OAT codebooks in terms of the system performance and complexity.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark
2010-07-01
This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
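One of the soft-assignment schemes studied in this line of work is kernel-weighted voting: instead of a hard one-word vote, each continuous feature distributes a Gaussian-weighted vote over all visual words. The sketch below illustrates that idea; the kernel bandwidth, vocabulary size, and feature dimensionality are illustrative choices, not the paper's settings.

```python
import numpy as np

def soft_histogram(features, codebook, sigma=1.0):
    """Kernel-based soft assignment: each continuous feature spreads a
    Gaussian-weighted vote over all visual words, instead of the hard
    one-word vote of the traditional codebook model."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)    # per-feature word distribution
    h = w.sum(axis=0)                    # accumulate over the image
    return h / h.sum()                   # normalized image histogram

rng = np.random.default_rng(4)
words = rng.standard_normal((50, 16))    # toy visual vocabulary
feats = rng.standard_normal((200, 16))   # toy image features
h = soft_histogram(feats, words)
assert h.shape == (50,) and np.isclose(h.sum(), 1.0)
```

Because a feature lying between two words contributes to both histogram bins, the representation degrades gracefully as the vocabulary grows, which matches the paper's finding that soft assignment stays consistent where the hard-assignment model deteriorates.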
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (ECRVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
Developing Codebooks as a New Tool to Analyze Students' ePortfolios
ERIC Educational Resources Information Center
Impedovo, Maria Antonietta; Ritella, Giuseppe; Ligorio, Maria Beatrice
2013-01-01
This paper describes a three-step method for the construction of codebooks meant for analyzing ePortfolio content. The first step produces a prototype based on qualitative analysis of very different ePortfolios from the same course. During the second step, the initial version of the codebook is tested on a larger sample and subsequently revised.…
Ice Shape Characterization Using Self-Organizing Maps
NASA Technical Reports Server (NTRS)
McClain, Stephen T.; Tino, Peter; Kreeger, Richard E.
2011-01-01
A method for characterizing ice shapes using a self-organizing map (SOM) technique is presented. Self-organizing maps are neural-network techniques for representing noisy, multi-dimensional data aligned along a lower-dimensional and possibly nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. In information processing, the intent of SOM methods is to transmit the codebook vectors, which contain far fewer elements and require much less memory or bandwidth than the original noisy data set. When applied to airfoil ice accretion shapes, the properties of the codebook vectors and the statistical nature of the SOM methods allow for a quantitative comparison of experimentally measured mean or average ice shapes to ice shapes predicted using computer codes such as LEWICE. The nature of the codebook vectors also enables grid generation and surface roughness descriptions for use with the discrete-element roughness approach. In the present study, SOM characterizations are applied to a rime ice shape, a glaze ice shape at an angle of attack, a bi-modal glaze ice shape, and a multi-horn glaze ice shape. Improvements and future explorations will be discussed.
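The winner-plus-neighbours update described above can be sketched for a 1-D chain of codebook vectors. This is a generic SOM illustration, not the ice-shape implementation: the learning rate, neighbourhood width, and the toy curved-manifold data are all assumptions made for the example.

```python
import numpy as np

def som_epoch(codebook, data, lr=0.1, sigma=1.0):
    """One epoch of a 1-D self-organizing map: for each noisy sample,
    the winner codebook vector and its chain neighbours are moved
    toward the sample, aligning the chain with the data manifold."""
    codebook = codebook.copy()
    pos = np.arange(len(codebook), dtype=float)
    for x in data:
        win = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
        # neighbourhood function decays with distance along the chain
        h = np.exp(-((pos - win) ** 2) / (2.0 * sigma ** 2))
        codebook += lr * h[:, None] * (x - codebook)
    return codebook

# noisy samples along a curved 1-D manifold (a half circle);
# a short chain of codebook vectors learns its shape
rng = np.random.default_rng(5)
t = rng.uniform(0.0, np.pi, 400)
data = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((400, 2))
cb = som_epoch(rng.standard_normal((10, 2)), data)
assert cb.shape == (10, 2)
```

The neighbourhood term is what distinguishes the SOM from plain adaptive VQ: neighbouring codebook vectors are dragged along with the winner, so the chain unfolds smoothly along the manifold rather than scattering, which is the property exploited to represent mean ice shapes.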
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
A novel unsupervised spike sorting algorithm for intracranial EEG.
Yadav, R; Shah, A K; Loeb, J A; Swamy, M N S; Agarwal, R
2011-01-01
This paper presents a novel, unsupervised spike classification algorithm for intracranial EEG. The method combines template matching and principal component analysis (PCA) for building a dynamic patient-specific codebook without a priori knowledge of the spike waveforms. The problem of misclassification due to overlapping classes is resolved by identifying similar classes in the codebook using hierarchical clustering. Cluster quality is visually assessed by projecting inter- and intra-clusters onto a 3D plot. Intracranial EEG from 5 patients was utilized to optimize the algorithm. The resulting codebook retains 82.1% of the detected spikes in non-overlapping and disjoint clusters. Initial results suggest a definite role of this method for both rapid review and quantitation of interictal spikes that could enhance both clinical treatment and research studies on epileptic patients.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook for encoding the image, is developed.
In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in the conclusion.
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Chou, Ping-Yi; Chou, Jyh-Horng
2015-11-01
The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, the Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include the genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and the firefly algorithm (FF). The study provides performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches comprise PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches comprise PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of the test images are grouped according to the PCA. The PCA-EA-LBG approaches use the vectors grouped by the PCA as initial individuals, and the best solution found by the EA is given to LBG to discover a codebook. The PCA-LBG approach uses the PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA approaches use the final result of PCA-LBG as an initial individual for the EAs to find a codebook. The search scheme in PCA-EA-LBG applies global search first and then local search, while that in PCA-LBG-EA applies local search first and then global search. The results verify that PCA-EA-LBG indeed gains superior results compared to PCA-LBG-EA, because PCA-EA-LBG explores a global area to find a solution and then exploits a better one in the local area of that solution. Furthermore, the proposed PCA-EA-LBG approaches outperform existing approaches reported in the literature for designing VQ codebooks.
Development of a researcher codebook for use in evaluating social networking site profiles.
Moreno, Megan A; Egan, Katie G; Brockman, Libby
2011-07-01
Social networking sites (SNSs) are immensely popular and allow for the display of personal information, including references to health behaviors. Evaluating displayed content on an SNS for research purposes requires a systematic approach and a precise data collection instrument. The purpose of this article is to describe one approach to the development of a research codebook so that others may develop and test their own codebooks for use in SNS research. Our SNS research codebook began on the basis of health behavior theory and clinical criteria. Key elements in the codebook developmental process included an iterative team approach and an emphasis on confidentiality. Codebook successes include consistently high inter-rater reliability. Challenges include time investment in coder training and SNS server changes. We hope that this article will provide detailed information about one systematic approach to codebook development so that other researchers may use this structure to develop and test their own codebooks for use in SNS research.
NASA Astrophysics Data System (ADS)
Daher, H.; Gaceb, D.; Eglin, V.; Bres, S.; Vincent, N.
2012-01-01
We present in this paper a feature selection and weighting method for medieval handwriting images that relies on codebooks of shapes of small strokes of characters (graphemes issued from the decomposition of the manuscripts). These codebooks are important for simplifying the automation of analysis, manuscript transcription, and the recognition of styles or writers. Our approach provides precise feature weighting by genetic algorithms and a high-performance methodology for categorizing the shapes of graphemes into codebooks by using graph coloring; these codebooks are applied in turn to CBIR (Content Based Image Retrieval) in a mixed handwriting database containing pages from different writers, historical periods, and quality levels. We show how the coupling of these two mechanisms, feature weighting and grapheme classification, can offer a better separation of the forms to be categorized by exploiting their grapho-morphological and density particularities and their significant orientations.
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method makes it possible to project simultaneously the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves better mapping quality in terms of the topology preservation measure q(m).
Office of Workers’ Compensation Programs (OWCP). Data Codebook. Version 1.0
1993-12-01
Table-of-contents and field-definition fragment: Section 4, OWCP Data Codebook (4.1 Codebook Description, p. 5; 4.2 Codebook Column Heading Definitions, p. 5); field OWCP (EARLY-REF), first character variable, originally used between 1987 and 1990 (T = test group case, C = control group case); sample occupation codes including 3707 Metalizing, 3708 Metal process working, 4255 Fuel distribution system mechanic, 4301 Miscellaneous pliable materials work, and Nondestructive testing.
Vector adaptive predictive coder for speech and audio
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)
1990-01-01
A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing from vectors in the first codebook zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s.sub.n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s.sub.n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s.sub.n from the decoder codebook vector selected by the transmitted index.
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage. The EAM-codebook establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and a low demand on resources (system memory). Results on image compression and quality are presented.
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in generating the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
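The codebook construction described above can be sketched end to end: draw Laplacian samples, group them into vectors, and shape their DCT coefficients before transforming back. The perceptual weight vector below is a simple 1/(1+k) placeholder, not the HVS weight matrix of the paper, and the codebook size and vector dimension are arbitrary.

```python
import numpy as np

def mvq_codebook(lam, n_vectors=256, dim=16, seed=0):
    """Sketch of MVQ codebook generation: Laplacian residual samples
    (scale lam, the only image-dependent parameter) are grouped into
    vectors, and their DCT coefficients are shaped by a perceptual
    weight vector before applying the inverse DCT."""
    rng = np.random.default_rng(seed)
    v = rng.laplace(0.0, lam, size=(n_vectors, dim))
    # orthonormal 1-D DCT-II matrix built from its definition
    n = np.arange(dim)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * dim))
    C *= np.sqrt(2.0 / dim)
    C[0] /= np.sqrt(2.0)
    w = 1.0 / (1.0 + n)                  # placeholder perceptual weights
    coeffs = w[:, None] * (C @ v.T)      # filter the DCT coefficients
    return (C.T @ coeffs).T              # inverse DCT (C is orthonormal)

cb = mvq_codebook(lam=2.0)
assert cb.shape == (256, 16)
```

Because the generator is seeded deterministically, the decoder can rebuild an identical codebook from lambda alone, which is exactly why no explicit codebook needs to be stored or transmitted.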
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
ERIC Educational Resources Information Center
National Center for Education Statistics (ED), Washington, DC.
This CD-ROM contains a separate electronic codebook for each of the following National Center for Education Statistics data sets: (1) B94, Baccalaureate and Beyond 1993-94 (restricted); (2) B97, Baccalaureate and Beyond 1993-97 (restricted); (3) BP4, Beginning Postsecondary Students 1990-94 (restricted); (4) FAC, 1992-93 National Student of…
Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach
NASA Technical Reports Server (NTRS)
Mcclain, Stephen T.; Kreeger, Richard E.
2013-01-01
Self-organizing maps are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional and nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high-resolution three-dimensional scanner system capable of resolving ice-shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2 deg AOA just after the initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0 deg AOA following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of the early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.
High-performance ultra-low power VLSI analog processor for data compression
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1996-01-01
An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
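A software sketch of what the analog array computes, assuming squared Euclidean distance as the closeness measure (the patent leaves the exact measure to the processor cells): each of the N columns scores one codebook vector against the input, and the closeness-determination device picks the argmin.

```python
import numpy as np

def vq_encode(inputs, codebook):
    """Compare every M-component input vector against all N codebook
    vectors at once (mimicking the N parallel columns of the array)
    and return the index of the closest codebook vector for each."""
    # squared Euclidean distances, shape (num_inputs, N)
    d = ((inputs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)  # one codebook index per input vector
```

The broadcasted subtraction plays the role of the M×N grid of processor cells; the `sum` over components corresponds to the combination device.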
Reflective Course Construction: An Analysis of Student Feedback and Its Role in Curricular Design
ERIC Educational Resources Information Center
Mitchell, Erik
2013-01-01
This study uses formal and informal student feedback as a source for understanding the impact of experimental course elements. Responses were used to develop a codebook, which was then applied to the entire dataset. The results inform our understanding of students' conceptions of professional identity, learning styles, and curriculum design.…
Musical sound analysis/synthesis using vector-quantized time-varying spectra
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Beauchamp, James W.
2002-11-01
A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
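The pre-flattening step can be illustrated with a short sketch. A linear fit in the log-magnitude domain is assumed here as the tilt model (the abstract does not specify one); the fitted slope is returned separately so it can serve as the independently controllable "brightness" parameter.

```python
import numpy as np

def flatten_spectrum(mag):
    """Remove spectral tilt before codebook clustering: fit a line to
    the log-magnitude spectrum, return the flattened spectrum and the
    slope so tilt can be re-applied or varied at synthesis time."""
    logmag = np.log(np.maximum(mag, 1e-12))  # guard against log(0)
    bins = np.arange(len(mag))
    slope, intercept = np.polyfit(bins, logmag, 1)
    flat = logmag - (slope * bins + intercept)
    return np.exp(flat), slope
```

Clustering is then performed on the flattened spectra, and synthesis multiplies the selected Codebook spectrum by the desired tilt.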
The Introductory Sociology Survey
ERIC Educational Resources Information Center
Best, Joel
1977-01-01
The Introductory Sociology Survey (ISS) is designed to teach introductory students basic skills in developing causal arguments and in using a computerized statistical package to analyze survey data. Students are given codebooks for survey data and asked to write a brief paper predicting the relationship between at least two variables. (Author)
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
Hasan, Mehedi; Kotov, Alexander; Carcone, April; Dong, Ming; Naar, Sylvie; Hartlieb, Kathryn Brogan
2016-08-01
This study examines the effectiveness of state-of-the-art supervised machine learning methods in conjunction with different feature types for the task of automatic annotation of fragments of clinical text based on codebooks with a large number of categories. We used a collection of motivational interview transcripts consisting of 11,353 utterances, which were manually annotated by two human coders as the gold standard, and experimented with state-of-the-art classifiers, including Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), Random Forest (RF), AdaBoost, DiscLDA, Conditional Random Fields (CRF) and Convolutional Neural Network (CNN) in conjunction with lexical, contextual (label of the previous utterance) and semantic (distribution of words in the utterance across the Linguistic Inquiry and Word Count dictionaries) features. We found that, when the number of classes is large, the performance of CNN and CRF is inferior to SVM. When only lexical features were used, interview transcripts were automatically annotated by SVM with the highest classification accuracy among all classifiers of 70.8%, 61% and 53.7% based on the codebooks consisting of 17, 20 and 41 codes, respectively. Using contextual and semantic features, as well as their combination, in addition to lexical ones, improved the accuracy of SVM for annotation of utterances in motivational interview transcripts with a codebook consisting of 17 classes to 71.5%, 74.2%, and 75.1%, respectively. Our results demonstrate the potential of using machine learning methods in conjunction with lexical, semantic and contextual features for automatic annotation of clinical interview transcripts with near-human accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method by combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. In contrast, high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
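The simultaneous, recursive solution for the predictor coefficients can be sketched with a standard recursive least-squares (RLS) update; the variable names, model order, and forgetting factor below are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def rls_predictor(signal, order=2, lam=0.99, delta=100.0):
    """Recursive least-squares estimate of linear-prediction
    coefficients: at each step predict signal[n] from the previous
    'order' samples and update the weights from the prediction error."""
    w = np.zeros(order)
    P = np.eye(order) * delta          # inverse correlation matrix
    for n in range(order, len(signal)):
        u = signal[n - order:n][::-1]  # most recent sample first
        e = signal[n] - w @ u          # a-priori prediction error
        k = P @ u / (lam + u @ P @ u)  # RLS gain vector
        w = w + k * e
        P = (P - np.outer(k, u) @ P) / lam
    return w
```

Run on a signal generated by a known stable autoregressive model, the weights converge to that model's coefficients, which is the sense in which the predictor and excitation can be identified recursively.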
Analysis of the possibility of using G.729 codec for steganographic transmission
NASA Astrophysics Data System (ADS)
Piotrowski, Zbigniew; Ciołek, Michał; Dołowski, Jerzy; Wojtuń, Jarosław
2017-04-01
Network steganography is dedicated in particular to those communication services in which no bridges or nodes carry out unintentional attacks on the steganographic sequence. To set up a hidden communication channel, a method of data encoding and decoding was implemented using the codebooks of the G.729 codec. The G.729 codec is built around the CS-ACELP (Conjugate Structure Algebraic Code Excited Linear Prediction) linear prediction vocoder, and by modifying the binary content of the codebook it is easy to change the binary output stream. The article describes the results of research on selecting those bits of the G.729 codebook whose negation has the least influence on the quality and fidelity of the output signal. The study was performed using subjective and objective listening tests.
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
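The structural trick behind the efficient VSELP search can be sketched in miniature: each codevector is a ±1-weighted sum of M basis vectors, so the coder stores M basis vectors instead of 2^M codevectors, and searching amounts to choosing sign patterns. The basis vectors and sizes below are invented for illustration.

```python
import numpy as np

def vselp_codevector(basis, code):
    """Build the excitation codevector selected by an M-bit code as a
    +/-1-weighted sum of the M basis vectors (a bit of 1 maps to +1,
    a bit of 0 maps to -1)."""
    signs = np.where(np.asarray(code, dtype=bool), 1.0, -1.0)
    return signs @ basis
```

Because flipping one bit only negates one basis vector's contribution, correlations needed for the search can be updated incrementally rather than recomputed per codevector.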
Compression of next-generation sequencing quality scores using memetic algorithm
2014-01-01
Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs a compression codebook using MA-based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
1983-12-01
...a system for classifying artifacts and artifact fragments according to material of manufacture as well as form, organized to segregate material, style, and manufacturing techniques of functional and chronological significance. The codebook manual contains instructions for making critical
SIFT Meets CNN: A Decade Survey of Instance Retrieval.
Zheng, Liang; Yang, Yi; Tian, Qi
2018-05-01
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
Context-Aware Local Binary Feature Learning for Face Recognition.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2018-05-01
In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable to heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces at the feature level. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.
McElroy, L. M.; Woods, D. M.; Yanes, A. F.; Skaro, A. I.; Daud, A.; Curtis, T.; Wymore, E.; Holl, J. L.; Abecassis, M. M.; Ladner, D. P.
2016-01-01
Objective Efforts to improve patient safety are challenged by the lack of universally agreed upon terms. The International Classification for Patient Safety (ICPS) was developed by the World Health Organization for this purpose. This study aimed to test the applicability of the ICPS to a surgical population. Design A web-based safety debriefing was sent to clinicians involved in surgical care of abdominal organ transplant patients. A multidisciplinary team of patient safety experts, surgeons and researchers used the data to develop a system of classification based on the ICPS. Disagreements were reconciled via consensus, and a codebook was developed for future use by researchers. Results A total of 320 debriefing responses were used for the initial review and codebook development. In total, the 320 debriefing responses contained 227 patient safety incidents (range: 0–7 per debriefing) and 156 contributing factors/hazards (0–5 per response). The most common severity classification was ‘reportable circumstance,’ followed by ‘near miss.’ The most common incident types were ‘resources/organizational management,’ followed by ‘medical device/equipment.’ Several aspects of surgical care were encompassed by more than one classification, including operating room scheduling, delays in care, trainee-related incidents, interruptions and handoffs. Conclusions This study demonstrates that a framework for patient safety can be applied to facilitate the organization and analysis of surgical safety data. Several unique aspects of surgical care require consideration, and by using a standardized framework for describing concepts, research findings can be compared and disseminated across surgical specialties. The codebook is intended for use as a framework for other specialties and institutions. PMID:26803539
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process, and IP-HMM performs the recognition. The novelty here lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.
1987-04-01
The 1986 ARI Survey of Army Recruits: Codebook for Summer 86 USAR & ARNG Survey Respondents. U.S. Army Research Institute for the Behavioral and Social Sciences. Approved for public release; distribution unlimited. NOTE: This Research Product is not to be construed as an official Department of the Army...
Iris Image Classification Based on Hierarchical Visual Codebook.
Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang
2014-06-01
Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.
Armed Forces 1996 Equal Opportunity Survey: Administration, Datasets, and Codebook
1997-12-01
...was taken in the preparation of analysis files. These files balance two needs: public access to data with sufficient information for accurate estimates... Native American/Alaskan Native, and Other. The duty location variable has two levels: US (a duty station in any of the 50 states or the District of... More specifically, the new DMDC procedures most closely follow CASRO's Sample Type U design. As discussed by CASRO, the overall response rate has two
ERIC Educational Resources Information Center
Tourangeau, Karen; Nord, Christine; Le, Thanh; Sorongon, Alberto G.; Najarian, Michelle
2009-01-01
This manual provides guidance and documentation for users of the eighth-grade data of the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K). It begins with an overview of the ECLS-K study. Subsequent chapters provide details on the instruments and measures used, the sample design, weighting procedures, response rates, data…
Information preserving coding for multispectral data
NASA Technical Reports Server (NTRS)
Duan, J. R.; Wintz, P. A.
1973-01-01
A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, which reserves two codewords in the codebook to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ system where the current state is determined by the previous channel symbol only is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
Hazardous sign detection for safety applications in traffic monitoring
NASA Astrophysics Data System (ADS)
Benesova, Wanda; Kottman, Michal; Sidla, Oliver
2012-01-01
The transportation of hazardous goods on public street systems can pose severe safety threats in case of accidents. One solution to these problems is automatic detection and registration of vehicles which are marked with dangerous-goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different methods for the detection: the bag of visual words (BoW) procedure and our approach, pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate. Different codebook sizes have been evaluated for this detection task. The best result for the first method (BoW) was 67% of hazardous signs successfully recognized, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
ERIC Educational Resources Information Center
Tourangeau, Karen; Nord, Christine; Lê, Thanh; Sorongon, Alberto G.; Hagedorn, Mary C.; Daly, Peggy; Najarian, Michelle
2015-01-01
This manual provides guidance and documentation for users of the kindergarten (or base year) data of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It begins with an overview of the ECLS-K:2011. Subsequent chapters provide details on the study data collection instruments and methods; the direct and indirect…
BSIFT: toward data-independent codebook for large scale image search.
Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi
2015-03-01
Bag-of-Words (BoWs) model based on Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model, which generates visual words from the high-dimensional SIFT features, so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as necessity of visual codebook training, limited reliability, and update inefficiency. To avoid the above problems, in this paper, a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, which is called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits out from BSIFT as code word, the generated BSIFT naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied in image search in some resource-limited scenarios. We evaluate the proposed algorithm for large scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
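The data-independent quantization idea can be sketched as follows. Median thresholding is used here as one simple way to turn a descriptor into bits without any trained codebook; the actual BSIFT construction is more elaborate, so treat this as an illustrative assumption.

```python
import numpy as np

def binarize_descriptor(desc):
    """Quantize a descriptor to a bit-vector by thresholding each
    dimension at the descriptor's own median; no codebook training
    or image-collection statistics are needed."""
    return (desc > np.median(desc)).astype(np.uint8)

def hamming(a, b):
    """Matching cost between two binarized descriptors."""
    return int(np.count_nonzero(a != b))
```

Matching then reduces to Hamming distance, and a fixed prefix of the bit-vector can serve as the code word for an inverted file, as the abstract describes.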
Texton-based analysis of paintings
NASA Astrophysics Data System (ADS)
van der Maaten, Laurens J. P.; Postma, Eric O.
2010-08-01
The visual examination of paintings is traditionally performed by skilled art historians using their eyes. Recent advances in intelligent systems may support art historians in determining the authenticity or date of creation of paintings. In this paper, we propose a technique for the examination of brushstroke structure that views the wildly overlapping brushstrokes as texture. The analysis of the painting texture is performed with the help of a texton codebook, i.e., a codebook of small prototypical textural patches. The texton codebook can be learned from a collection of paintings. Our textural analysis technique represents paintings in terms of histograms that measure the frequency by which the textons in the codebook occur in the painting (so-called texton histograms). We present experiments that show the validity and effectiveness of our technique for textural analysis on a collection of digitized high-resolution reproductions of paintings by Van Gogh and his contemporaries. As texton histograms cannot easily be interpreted by art experts, the paper proposes two approaches to visualize the results of the textural analysis. The first approach visualizes the similarities between the histogram representations of paintings by employing a recently proposed dimensionality reduction technique, called t-SNE. We show that t-SNE reveals a clear separation of paintings created by Van Gogh and those created by other painters. In addition, the period of creation is faithfully reflected in the t-SNE visualizations. The second approach visualizes the similarities and differences between paintings by highlighting regions in a painting in which the textural structure of the painting is unusual. We illustrate the validity of this approach by means of an experiment in which we highlight regions in a painting by Monet that are not very "Van Gogh-like". Taken together, we believe the tools developed in this study are well capable of assisting art historians in their study of paintings.
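The texton-histogram representation can be sketched as below. The patch size and the nearest-texton assignment are generic choices for illustration, not the authors' exact pipeline (which learns the codebook from a painting collection first).

```python
import numpy as np

def texton_histogram(image, textons, patch=3):
    """Describe a grayscale image by its texton frequencies: slide a
    patch window over the image, assign each patch to the closest
    codebook texton, and accumulate a normalized histogram."""
    h, w = image.shape
    counts = np.zeros(len(textons))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            p = image[i:i + patch, j:j + patch].ravel()
            d = ((textons - p) ** 2).sum(axis=1)  # distance to each texton
            counts[d.argmin()] += 1
    return counts / counts.sum()
```

Paintings are then compared via their histograms (e.g., embedded with t-SNE), and locally unusual regions are those whose patches map to textons rare in the painter's overall distribution.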
Real-time speech encoding based on Code-Excited Linear Prediction (CELP)
NASA Technical Reports Server (NTRS)
Leblanc, Wilfrid P.; Mahmoud, S. A.
1988-01-01
This paper reports on ongoing work on the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.
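The codebook search whose complexity is being reduced is an analysis-by-synthesis loop; a toy version can be sketched as follows. The filter order, optimal-gain handling, and names are illustrative assumptions; real CELP coders add perceptual weighting and an adaptive codebook.

```python
import numpy as np

def synth(excitation, a):
    """All-pole short-term synthesis filter:
    out[n] = e[n] + sum_k a[k-1] * out[n-k]."""
    out = np.zeros(len(excitation))
    for n in range(len(excitation)):
        acc = excitation[n]
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                acc += ak * out[n - k]
        out[n] = acc
    return out

def celp_search(target, codebook, a):
    """Analysis-by-synthesis search: pass every codevector through the
    synthesis filter, scale it by its optimal gain, and keep the index
    minimizing the squared error against the target speech segment."""
    best, best_err = 0, np.inf
    for i, c in enumerate(codebook):
        y = synth(c, a)
        g = (target @ y) / max(y @ y, 1e-12)  # optimal gain per codevector
        err = ((target - g * y) ** 2).sum()
        if err < best_err:
            best, best_err = i, err
    return best
```

The cost of this brute-force loop grows with codebook size, which is exactly what the structured and sparse codebooks discussed elsewhere in this collection are designed to cut.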
Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF
Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto
2015-01-01
One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performances on the far-infrared (FIR) spectrum than on the visible one, even in daytime conditions, for pedestrian classification. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions-of-interest (ROI), since in FIR images pedestrian shapes may vary over large scales, but heads usually appear as light regions. ROI are detected with a high recall rate with the hierarchical codebook of SURF features located in head regions. The second module consists of pedestrian full-body classification using SVM. This module allows one to enhance the precision with low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like Adaboost-cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors, trained on the FIR images. PMID:25871724
Codebook-based electrooculography data analysis towards cognitive activity recognition.
Lagodzinski, P; Shirahama, K; Grzegorzek, M
2018-04-01
With the advancement in mobile/wearable technology, people have started to use a variety of sensing devices to track their daily activities as well as health and fitness conditions in order to improve the quality of life. This work addresses eye movement analysis, which, due to its strong correlation with cognitive tasks, can be successfully utilized in activity recognition. Eye movements are recorded using an electrooculographic (EOG) system built into the frames of glasses, which can be worn more unobtrusively and comfortably than other devices. Since the obtained information is low-level sensor data expressed as a sequence of values sampled at constant intervals (100 Hz), the cognitive activity recognition problem is formulated as sequence classification. However, it is unclear what kind of features are useful for accurate cognitive activity recognition. Thus, a codebook approach is applied which, instead of focusing on feature engineering, describes sequences of recorded EOG data by the distribution of characteristic subsequences (codewords), where the codewords are obtained by clustering a large number of subsequences. Further, statistical analysis of the codeword distribution reveals features that are characteristic of a certain activity class. Experimental results demonstrate good accuracy of codebook-based cognitive activity recognition, reflecting the effective usage of the codewords. Copyright © 2017 Elsevier Ltd. All rights reserved.
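The codebook approach summarized above can be sketched in a few lines (our own minimal construction: window length, cluster count, and the toy signals are arbitrary choices, and the paper's actual EOG pipeline is more involved):

```python
import numpy as np

def extract_windows(seq, w, step=1):
    """Slide a length-w window over a 1-D sequence."""
    return np.array([seq[i:i + w] for i in range(0, len(seq) - w + 1, step)])

def kmeans(data, k, iters=20, seed=0):
    """Tiny Lloyd's k-means; the final centers are the codewords."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def codeword_histogram(seq, centers, w):
    """Describe a sequence by the distribution of its nearest codewords."""
    wins = extract_windows(seq, w)
    d = np.linalg.norm(wins[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

t = np.arange(200) / 10.0
smooth = np.sin(t)                  # stand-ins for two activity classes
jumpy = np.sign(np.sin(8 * t))
train = np.vstack([extract_windows(smooth, 10), extract_windows(jumpy, 10)])
centers = kmeans(train, k=8)
h1 = codeword_histogram(smooth, centers, 10)
h2 = codeword_histogram(jumpy, centers, 10)
```

The two synthetic "activities" yield clearly different codeword distributions, which is exactly what a downstream classifier would consume.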
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
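The level-by-level decomposition can be illustrated with a toy multistage quantizer (our own sketch, not the paper's implementation; including a zero codeword at each level guarantees that reconstruction error never grows as levels are decoded):

```python
import numpy as np

def nearest(codebook, v):
    """Full-search VQ: return the codeword closest to v."""
    return codebook[np.linalg.norm(codebook - v, axis=1).argmin()]

def progressive_encode(v, codebooks):
    """Quantize v level by level; each level codes the previous residual."""
    parts, residual = [], v.astype(float)
    for cb in codebooks:
        q = nearest(cb, residual)
        parts.append(q)
        residual = residual - q
    return parts

rng = np.random.default_rng(1)
dim = 4
# one codebook per level, shrinking scale; row 0 is the zero codeword
codebooks = [np.vstack([np.zeros(dim), rng.standard_normal((31, dim)) * s])
             for s in (1.0, 0.3, 0.1)]
v = rng.standard_normal(dim)
parts = progressive_encode(v, codebooks)
# reconstruction error after decoding 1, 2, then 3 levels
errors = [np.linalg.norm(v - sum(parts[:k + 1])) for k in range(3)]
```

Decoding more levels can only tighten the reconstruction, mirroring the progressive refinement the abstract describes (a lossless final stage would drive the error to zero).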
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
Predictive Multiple Model Switching Control with the Self-Organizing Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
2000-01-01
A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.
Multipath search coding of stationary signals with applications to speech
NASA Astrophysics Data System (ADS)
Fehn, H. G.; Noll, P.
1982-04-01
This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper evaluates the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports on results of MSC coding of speech, in which both adaptive quantization and adaptive prediction were included in the coder design.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Symbolic feature detection for image understanding
NASA Astrophysics Data System (ADS)
Aslan, Sinem; Akgül, Ceyhun Burak; Sankur, Bülent
2014-03-01
In this study we propose a model-driven codebook generation method that assigns probability scores to pixels in order to represent the underlying local shapes they reside in. In the first version of the symbol library we limited ourselves to photometric and similarity transformations applied to eight prototypical shapes (flat plateau, ramp, valley, ridge, and circular and elliptic pit and hill) and used a randomized decision forest as the statistical classifier to compute the shape-class ambiguity of each pixel. We achieved 90% accuracy in identifying known objects from alternate views; in recognizing unknown objects, however, we outperformed only the color-based method, not the texture-based or the global and local shape methods. We present a progress plan for future work to further improve the proposed approach.
Trucks involved in fatal accidents codebook 2008.
DOT National Transportation Integrated Search
2011-01-01
This report provides documentation for UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 2008, including distributions of the code values for each variable in the file. The 2008 TIFA file is a census of all medium and heavy trucks invo...
Buses involved in fatal accidents codebook 2008.
DOT National Transportation Integrated Search
2011-03-01
This report provides documentation for UMTRI's file of Buses Involved in Fatal Accidents (BIFA), 2008, including distributions of the code values for each variable in the file. The 2008 BIFA file is a census of all buses involved in a fatal acc...
Buses involved in fatal accidents codebook 2007.
DOT National Transportation Integrated Search
2009-12-01
This report provides documentation for UMTRI's file of Buses Involved in Fatal Accidents (BIFA), 2007, including distributions of the code values for each variable in the file. The 2007 BIFA file is a census of all buses involved in a fatal acc...
Roberts, Megan C; Clyne, Mindy; Kennedy, Amy E; Chambers, David A; Khoury, Muin J
2017-10-26
Purpose: Implementation science offers methods to evaluate the translation of genomic medicine research into practice. The extent to which the National Institutes of Health (NIH) human genomics grant portfolio includes implementation science is unknown. This brief report's objective is to describe recently funded implementation science studies in genomic medicine in the NIH grant portfolio, and identify remaining gaps. Methods: We identified investigator-initiated NIH research grants on implementation science in genomic medicine (funding initiated 2012-2016). A codebook was adapted from the literature, three authors coded grants, and descriptive statistics were calculated for each code. Results: Forty-two grants fit the inclusion criteria (~1.75% of investigator-initiated genomics grants). The majority of included grants proposed qualitative and/or quantitative methods with cross-sectional study designs, and described clinical settings and primarily white, non-Hispanic study populations. Most grants were in oncology and examined genetic testing for risk assessment. Finally, grants lacked the use of implementation science frameworks, and most examined uptake of genomic medicine and/or assessed patient-centeredness. Conclusion: We identified large gaps in implementation science studies in genomic medicine in the funded NIH portfolio over the past 5 years. To move the genomics field forward, investigator-initiated research grants should employ rigorous implementation science methods within diverse settings and populations. Genetics in Medicine advance online publication, 26 October 2017; doi:10.1038/gim.2017.180.
Large-scale classification of traffic signs under real-world conditions
NASA Astrophysics Data System (ADS)
Hazelhoff, Lykele; Creusen, Ivo; van de Wouw, Dennis; de With, Peter H. N.
2012-02-01
Traffic sign inventories are important to governmental agencies as they facilitate evaluation of traffic sign locations and are beneficial for road and sign maintenance. These inventories can be created (semi-)automatically based on street-level panoramic images. In these images, object detection is employed to detect the signs in each image, followed by a classification stage to retrieve the specific sign type. Classification of traffic signs is a complicated matter, since sign types are very similar with only minor differences within the sign, a high number of different signs is involved, and multiple distortions occur, including variations in capturing conditions, occlusions, viewpoints and sign deformations. Therefore, we propose a method for robust classification of traffic signs, based on the Bag of Words approach for generic object classification. We extend the approach with a flexible, modular codebook to model the specific features of each sign type independently, in order to emphasize the inter-sign differences instead of the parts common to all sign types. Additionally, this allows us to model and label the false detections present. Furthermore, analysis of the classification output identifies unreliable results. This classification system has been extensively tested for three different sign classes, covering 60 different sign types in total. These three data sets contain the sign detection results on street-level panoramic images, extracted from a country-wide database. The introduction of the modular codebook shows a significant improvement for all three sets, where the system is able to classify about 98% of the reliable results correctly.
Parental Perceptions of Neighborhood Effects in Latino Comunas
Horner, Pilar; Sanchez, Ninive; Castillo, Marcela; Delva, Jorge
2011-01-01
Objectives: To obtain rich information about how adult Latinos living in high-poverty/high-drug-use neighborhoods perceive and negotiate their environment. Methods: In 2008, thirteen adult caregivers in Santiago, Chile were interviewed with open-ended questions to ascertain beliefs about neighborhood effects and drug use. Analysis: Inductive analysis was used to develop the codebook and identify trends. Discussion: Residents externalized their understanding of drug use and misuse by invoking the concept of delinquent youth. A typology of their perceptions is offered. Learning more about residents' circumstances may help focus needs-based interventions. More research with Latino neighborhoods is needed for culturally competent models of intervention. PMID:22497879
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees (SPIHT).
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
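Frequency-sensitive competitive learning can be sketched roughly as follows (a minimal variant under our own assumptions; scaling each codevector's distance by its win count is the key idea, while the learning rate and schedule here are arbitrary):

```python
import numpy as np

def fscl_train(data, k, lr=0.1, epochs=5, seed=0):
    """Train a codebook with frequency-sensitive competitive learning.

    The winner for each sample minimizes distance * win_count, so
    frequently winning codevectors are handicapped and no codevector
    is left permanently unused ("dead").
    """
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    wins = np.ones(k)
    for _ in range(epochs):
        for v in data:
            scaled = np.linalg.norm(codebook - v, axis=1) * wins
            w = scaled.argmin()
            codebook[w] += lr * (v - codebook[w])  # move winner toward sample
            wins[w] += 1
    return codebook, wins

rng = np.random.default_rng(3)
# two well-separated clusters of training vectors
data = np.vstack([rng.normal(0, 0.1, (100, 2)), rng.normal(5, 0.1, (100, 2))])
codebook, wins = fscl_train(data, k=4)
```

Because of the frequency handicap, every codevector ends up winning at least once, which is the property that plain competitive learning lacks.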
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous approaches.
Trucks involved in fatal accidents codebook 2004 (Version March 23, 2007).
DOT National Transportation Integrated Search
2007-03-01
This report provides documentation for UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 2004, including distributions of the code values for each variable in the file. The 2004 TIFA file is a census of all medium and heavy trucks inv...
December 2005 Status of Forces Survey of Active Duty Members: Administration, Datasets, and Codebook
2006-06-01
Trucks involved in fatal accidents codebook 2010 (Version October 22, 2012).
DOT National Transportation Integrated Search
2012-11-01
This report provides documentation for UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 2010, including distributions of the code values for each variable in the file. The 2010 TIFA file is a census of all medium and heavy trucks invo...
Distributed single source coding with side information
NASA Astrophysics Data System (ADS)
Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.
2004-01-01
In the paper we advocate image compression technique in the scope of distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the positions of source coding with side information and, contrarily to the existing scenarios, where side information is given explicitly, side information is created based on deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image independent and plays the role of auxiliary source. Due to the partial availability of side information at both encoder and decoder we treat our problem as a modification of Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates the superior performance in very low bit rate regime.
NASA Astrophysics Data System (ADS)
Taoka, Hidekazu; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru
This paper presents comparisons between common and dedicated reference signals (RSs) for channel estimation in MIMO multiplexing using codebook-based precoding for orthogonal frequency division multiplexing (OFDM) radio access in the Evolved UTRA downlink with frequency division duplexing (FDD). We clarify the best RS structure for precoding-based MIMO multiplexing by comparing the structures in terms of achievable throughput, taking into account the overhead of the common and dedicated RSs and of the precoding matrix indication (PMI) signal. Based on extensive throughput simulations of 2-by-2 and 4-by-4 MIMO multiplexing with precoding, we show that channel estimation based on common RSs multiplied by the precoding matrix indicated by the PMI signal achieves higher throughput than that using dedicated RSs, irrespective of the number of spatial multiplexing streams, when the number of available precoding matrices, i.e., the codebook size, is less than approximately 16 and 32 for 2-by-2 and 4-by-4 MIMO multiplexing, respectively.
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Liu, Xiaoming; Yang, Zhou
2017-07-01
Age-related Macular Degeneration (AMD) is a macular disease that mostly occurs in older people and may cause decreased vision or even permanent blindness. Drusen is an important clinical indicator for AMD that can help doctors diagnose the disease and decide on a treatment strategy. Optical Coherence Tomography (OCT) is widely used in the diagnosis of ophthalmic diseases, including AMD. In this paper, we propose a classification method based on Multiple Instance Learning (MIL) to detect AMD. Since drusen may appear in only a few slices of an OCT volume, MIL is well suited to this task. We divide the method into two phases: training and testing. In training, we cluster the initial features to create a codebook; in testing, we apply the trained classifier to the test set. Experimental results show that our method achieves high accuracy and effectiveness.
Munson, Michelle R; Narendorf, Sarah Carter; Ben-David, Shelly; Cole, Andrea
2018-05-24
Research has shown that how people think about their health (or illnesses) shapes their help-seeking behavior. In this mixed-methods study, we employed a simultaneous concurrent design to explore the perceptions of mental illness among an understudied population: marginalized young adults. Participants were 60 young adults (ages 18-25) who had experienced mood disorders and used multiple public systems of care during their childhoods. Semistructured interviews were conducted to understand participants' illness and treatment experiences during the transition to adulthood. A team of analysts used constant comparison to develop a codebook of the qualitative themes, and quantitative data were examined using SAS 9.3. Findings suggest that some theoretical categories identified in past illness-perceptions frameworks are salient to marginalized young adults (e.g., identity, management-or control-of symptoms), but both the developmental transition to adulthood and experiences with public systems of care add nuanced variations to illness and treatment perceptions. Our study demonstrates that young adults possess a set of beliefs and emotions about their mental health and help-seeking options that need to be better understood to improve engagement and quality of mental health care for this population. Implications for practice, research, and policy are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Trucks involved in fatal accidents codebook 1999 (version September 19, 2001)
DOT National Transportation Integrated Search
2001-09-01
This report provides one-way frequencies for all the vehicles in UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 1999. The 1999 TIFA file is a census of all medium and heavy trucks involved in a fatal accident in the United States. The TIF...
2000 SURVEY OF RESERVE COMPONENT PERSONNEL: ADMINISTRATION, DATASETS, AND CODEBOOK
2002-07-01
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated from the descriptor database to represent the statistical information of the dataset. Then, an accurate local-descriptor similarity score is computed by merging tf-idf weighted histogram matching with the weighting strategy of compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
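The tf-idf weighted histogram matching step can be illustrated as follows (a simplified sketch with hypothetical visual-word ids, not the CDVS implementation):

```python
import math
from collections import Counter

def tfidf_histograms(bags, vocab_size):
    """Turn bags of visual-word ids into tf-idf weighted histograms.

    Words that occur in few images get a higher idf weight, so rare
    features dominate the similarity score.
    """
    n = len(bags)
    df = Counter()
    for bag in bags:
        df.update(set(bag))                 # document frequency per word
    idf = [math.log((1 + n) / (1 + df[w])) + 1 for w in range(vocab_size)]
    hists = []
    for bag in bags:
        tf = Counter(bag)
        hists.append([tf[w] / len(bag) * idf[w] for w in range(vocab_size)])
    return hists

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# three toy "images" as bags of visual-word ids over a 5-word vocabulary
bags = [[0, 0, 1, 2], [0, 1, 1, 2], [3, 3, 4, 4]]
h = tfidf_histograms(bags, vocab_size=5)
```

The first two bags share words 0-2, so their cosine similarity is higher than that between the first and the third, which share no words at all.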
Method for coding low entropy data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1995-01-01
A method of lossless data compression for efficient coding of an electronic signal from information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set {s_0, s_1, s_2, ..., s_(N-1)} of N symbols with s_i = i. The differences between binary digital data are mapped into the symbol set S. Consecutive symbols in S are then paired into a new symbol set Gamma, a non-negative symbol set containing the symbols gamma_m obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code, defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows direct coding and decoding of the n-bit positive integer digital data differences without the use of codebooks.
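The pipeline of the abstract (difference mapping, symbol pairing, comma code) might look roughly like this (our own simplification; the zigzag difference map and the fixed pairing base are illustrative choices, not the patented scheme):

```python
def to_symbol(d):
    """Zigzag map: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * d if d >= 0 else -2 * d - 1

def from_symbol(s):
    return s // 2 if s % 2 == 0 else -(s + 1) // 2

def comma_encode(symbols, base):
    """Pair symbols into Gamma and emit a comma code per pair.

    Each pair index m becomes m zeros terminated by the comma bit 1,
    so the decoder can split the stream without any codebook.
    """
    assert len(symbols) % 2 == 0
    bits = ""
    for a, b in zip(symbols[0::2], symbols[1::2]):
        m = a * base + b                 # index of the pair in Gamma
        bits += "0" * m + "1"
    return bits

def comma_decode(bits, base):
    out = []
    for run in bits.split("1")[:-1]:     # each run of zeros ends at a comma
        m = len(run)
        out.extend([m // base, m % base])
    return out

diffs = [0, -1, 1, 0, 2, -2]             # toy data differences
syms = [to_symbol(d) for d in diffs]
base = max(syms) + 1
bits = comma_encode(syms, base)
decoded = [from_symbol(s) for s in comma_decode(bits, base)]
```

Decoding recovers the original differences exactly, and small-magnitude differences (the common case for low-entropy sources) produce the shortest codewords.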
2013-01-01
Objectives. I sought to describe current state-wide youth sports traumatic brain injury (TBI) laws and their relationship to prevailing scientific understandings of youth sports TBIs, and to facilitate further research by creating an open-source data set of current laws. Methods. I used Westlaw and LexisNexis databases to create a 50-state data set of youth sports TBI laws enacted between January 2009 and December 2012. I collected and coded the text and citations of each law and developed a protocol and codebook to facilitate future research. Results. Forty-four states and Washington, DC, passed youth sports TBI laws between 2009 and 2012. No state’s youth sports TBI law focuses on primary prevention. Instead, such laws focus on (1) increasing coaches’ and parents’ ability to identify and respond to TBIs and (2) reducing the immediate risk of multiple TBIs. Conclusions. Existing youth sports TBI laws were not designed to reduce initial TBIs. Evaluation is required to assess their effectiveness in reducing the risk and consequences of multiple TBIs. Continued research and evaluation of existing laws will be needed to develop a more comprehensive youth TBI-reduction solution. PMID:23678903
Agency Online: Trends in a University Learning Course
ERIC Educational Resources Information Center
Ligorio, Maria Beatrice; Impedovo, Maria Antonietta; Arcidiacono, Francesco
2017-01-01
This article aims to investigate how university students perform agency in an online course and whether the collaborative nature of the course affects such expression. A total of 11 online web forums involving 18 students (N = 745 posts in total) were qualitatively analysed through the use of a codebook composed of five categories (individual,…
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered, and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique is introduced for generating the appropriate gain-normalized codebook for the vector quantizer. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
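The forward-adaptive case can be sketched as follows (our own minimal version using the vector norm as the gain estimator; the paper studies more sophisticated, optimized estimators):

```python
import numpy as np

def gavq_encode(v, codebook):
    """Gain-adaptive VQ encoder: send (shape index, gain) per vector."""
    gain = np.linalg.norm(v)
    shape = v / gain if gain > 0 else v   # gain-normalized input
    idx = np.linalg.norm(codebook - shape, axis=1).argmin()
    return idx, gain

def gavq_decode(idx, gain, codebook):
    """Decoder scales the selected shape codevector by the received gain."""
    return gain * codebook[idx]

rng = np.random.default_rng(2)
raw = rng.standard_normal((256, 8))
# shape codebook of unit-norm codevectors
codebook = raw / np.linalg.norm(raw, axis=1, keepdims=True)

# same shape at very different signal levels
quiet = 0.01 * codebook[5]
loud = 100.0 * codebook[5]
iq, gq = gavq_encode(quiet, codebook)
il, gl = gavq_encode(loud, codebook)
```

Both the quiet and the loud vector map to the same shape index; only the transmitted gain differs, which is why normalization shrinks the dynamic range the codebook must cover.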
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
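The first idea, that prediction residuals themselves form an indexable textural description, can be illustrated with a left-neighbor predictor (our own toy construction, not the paper's algorithm):

```python
def residuals(image):
    """Prediction errors of a left-neighbor predictor, row by row."""
    out = []
    for row in image:
        prev = 0
        for px in row:
            out.append(px - prev)
            prev = px
    return out

def signature(image, clip=8):
    """Clipped, normalized histogram of residuals: a textural signature."""
    hist = [0] * (2 * clip + 1)
    for r in residuals(image):
        hist[max(-clip, min(clip, r)) + clip] += 1
    total = sum(hist)
    return [h / total for h in hist]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

flat = [[10] * 8 for _ in range(8)]        # smooth: residuals near zero
flat2 = [[12] * 8 for _ in range(8)]       # different level, same texture
stripes = [[0, 16] * 4 for _ in range(8)]  # busy: large residuals
```

The two flat images have (near-)identical residual signatures despite different gray levels, while the striped image is far away: retrieval can thus compare images without decoding them back to pixels.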
Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei
2014-12-01
Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.
Situation Model for Situation-Aware Assistance of Dementia Patients in Outdoor Mobility
Yordanova, Kristina; Koldrack, Philipp; Heine, Christina; Henkel, Ron; Martin, Mike; Teipel, Stefan; Kirste, Thomas
2017-01-01
Background: Dementia impairs spatial orientation and route planning, thus often affecting the patient’s ability to move outdoors and maintain social activities. Situation-aware deliberative assistive technology devices (ATD) can substitute impaired cognitive function in order to maintain one’s level of social activity. To build such a system, one needs domain knowledge about the patient’s situation and needs. We call this collection of knowledge the situation model. Objective: To construct a situation model for the outdoor mobility of people with dementia (PwD). The model serves two purposes: 1) as a knowledge base from which to build an ATD describing the mobility of PwD; and 2) as a codebook for the annotation of the recorded behavior. Methods: We perform systematic knowledge elicitation to obtain the relevant knowledge. The OBO Edit tool is used for implementing and validating the situation model. The model is evaluated by using it as a codebook for annotating the behavior of PwD during a mobility study, and interrater agreement is computed. In addition, clinical experts perform manual evaluation and curation of the model. Results: The situation model consists of 101 concepts with 11 relation types between them. The annotation results showed substantial agreement between the two annotators (Cohen’s kappa of 0.61). Conclusion: The situation model is a first attempt to systematically collect and organize information related to the outdoor mobility of PwD for the purposes of situation-aware assistance. The model is the base for building an ATD able to provide situation-aware assistance and to potentially improve the quality of life of PwD. PMID:29060937
User-Centered Design of the eyeGuide, a Tailored Glaucoma Behavior Change Program
Killeen, Olivia; MacKenzie, Chamisa; Heisler, Michele; Resnicow, Ken; Lee, Paul P.; Newman-Casey, Paula Anne
2016-01-01
PURPOSE We employed user-centered design to refine a prototype of the eyeGuide, a novel, tailored behavior change program intended to improve medication adherence among glaucoma patients. PATIENTS Glaucoma patients aged ≥40 years who were prescribed ≥1 glaucoma medication were included. METHODS The eyeGuide consists of tailored educational content and tailored testimonials in which patients share how they were able to overcome barriers to improve their medication adherence. A hybrid of semi-structured diagnostic and pre-testing interviews was used to refine the content of the eyeGuide. Purposeful sampling was used to recruit a study population representative of the glaucoma patient population. Interviews were conducted until thematic saturation was reached, audio recorded, and transcribed verbatim. Three researchers analyzed the transcripts, generated a codebook, and identified key themes using NVivo 10.0 to further refine the eyeGuide. RESULTS Twenty-one glaucoma patients were interviewed: mean age 72 ± 12.4 years; five (24%) African Americans; nine (43%) with poor self-reported adherence; ten (47.6%) aged ≥75 years; ten (47.6%) with poor vision; and nine (42.9%) women. Qualitative analysis identified five important themes for improving glaucoma self-management: social support, the patient-provider relationship, medication routine, patients' beliefs about disease and treatment, and eye drop instillation. All participants expressed satisfaction with in-person delivery of the eyeGuide and preferred it to a web-based module. Participant feedback resulted in revised content. CONCLUSIONS User-centered design generated improvements in the eyeGuide that would not have been possible without patient input. Participants expressed satisfaction with the tailored content. PMID:27096721
Defense and Development in Sub-Saharan Africa: Codebook.
1988-03-01
The codebook documents the data sources and explains how the statistics were compiled. The database covers 41 African countries and includes, in addition to economic and military data, social/political statistics, with sources and notes on the collection of the data.
ERIC Educational Resources Information Center
Miller, Warren; Tanter, Raymond
The International Relations Archive undertakes as its primary goals the acquisition, management and dissemination of international affairs data. The first document enclosed is a copy of the final machine readable codebook prepared for the data from the Political Events Project, 1948-1965. Also included is a copy of the final machine-readable…
1981-04-01
The variables resulting from the survey, and their locations on an SPSS system file, are documented in a user-oriented codebook, including free responses to open questions.
Key Themes in Mobile Learning: Prospects for Learner-Generated Learning through AR and VR
ERIC Educational Resources Information Center
Aguayo, Claudio; Cochrane, Thomas; Narayan, Vickel
2017-01-01
This paper summarises the findings from a literature review in mobile learning, developed as part of a 2-year six-institution project in New Zealand. Through the development of a key themes codebook, we address selected key themes with respect to their relevance to learner-generated learning through emerging technologies, with attention to mobile…
2007-06-01
The survey covers ten topic areas, including: 1. Background Information (Service, gender, paygrade, race/ethnicity, ethnic ancestry, and education); 2. Family; and retention measures such as likelihood to stay on active duty, spouse/family support for staying on active duty, years spent in military service, and willingness to recommend military service.
Hispanic Males' Perspectives of Health Behaviors Related to Weight Management
Garcia, David O.; Valdez, Luis A.; Hooker, Steven P.
2015-01-01
Hispanic males have the highest prevalence of overweight and obesity among men in the United States, yet are significantly underrepresented in weight loss research. The purpose of the current study was to examine Hispanic males' perspectives on health behaviors related to weight management, in order to refine methodologies for delivering a gender- and culturally sensitive weight loss intervention. From October 2014 to April 2015, semistructured interviews lasting approximately 60 minutes were conducted with 14 overweight Hispanic men ages 18 to 64 years. Participants also completed a brief questionnaire, and body weight and height were measured. Grounded in a deductive process, a preliminary codebook was developed based on the topics included in the interview guides. A thematic analysis facilitated the identification of inductive themes and the finalization of the codebook used for transcript analysis. Four overarching themes were identified: (a) general health beliefs about how diet and physical activity behaviors affect health outcomes, (b) barriers to healthy eating and physical activity, (c) motivators for change, and (d) viable recruitment and intervention approaches. Future research should examine feasible and appropriate recruitment and intervention strategies identified by Hispanic males to improve weight management in this vulnerable group. PMID:26634854
Aggarwal, Neil K.; DeSilva, Ravi; Nicasio, Andel V.; Boiler, Marit; Lewis-Fernández, Roberto
2014-01-01
Objectives Cross-cultural mental health researchers often analyze patient explanatory models of illness to optimize service provision. The Cultural Formulation Interview (CFI) is a cross-cultural assessment tool released in May 2013 with DSM-5 to revise shortcomings of the DSM-IV Outline for Cultural Formulation (OCF). The CFI field trial took place at 14 sites in 6 countries with 321 patients to explore its feasibility, acceptability, and clinical utility with patients and clinicians. We sought to analyze whether and how CFI feasibility, acceptability, and clinical utility were related to patient-clinician communication. Design We report data from the New York site, which enrolled 7 clinicians and 32 patients in 32 patient-clinician dyads. We undertook a data analysis independent of the parent field trial by conducting content analyses of debriefing interviews with all participants (n=64), based on codebooks derived from frameworks for medical communication and implementation outcomes. Three coders created codebooks, coded independently, established inter-rater coding reliability, and analyzed whether the CFI affects medical communication with respect to feasibility, acceptability, and clinical utility. Results Despite racial, ethnic, cultural, and professional differences within our group of patients and clinicians, we found that promoting satisfaction through the interview, eliciting data, eliciting the patient's perspective, and perceiving data at multiple levels were common codes that explained how the CFI affected medical communication. We also found that all but 2 codes fell under the implementation outcome of clinical utility, 2 fell under acceptability, and none fell under feasibility. Conclusion Our study offers new directions for research on how a cultural interview affects patient-clinician communication. Future research can analyze how the CFI and other cultural interviews affect medical communication in clinical settings, with subsequent effects on outcomes such as medication adherence, appointment retention, and health status. PMID:25372242
Education Longitudinal Study of 2002: Base Year Data File User's Manual. NCES 2004-405
ERIC Educational Resources Information Center
Ingels, Steven J.; Pratt, Daniel J.; Rogers, James E.; Siegel, Peter H.; Stutts, Ellen S.
2004-01-01
This manual has been produced to familiarize data users with the procedures followed for data collection and processing for the base year of the Education Longitudinal Study of 2002 (ELS:2002). It also provides the necessary documentation for use of the public-use data files, as they appear on the ELS:2002 base year Electronic Codebook (ECB). Most…
2000 Military Recruiter Survey: Administration, Datasets and Codebook
2002-08-01
ERIC Educational Resources Information Center
Cincotta, Dominic
2014-01-01
This research studies how brand identities, individually and communally, are created among small and medium-sized enterprise breweries in western Pennsylvania, as read through their websites. Content analysis through the frame of Kress and van Leeuwen was used as the basis for the codebook through which the researcher read each brand identity. The…
ERIC Educational Resources Information Center
Whitener, Summer D.; Gruber, Kerry J.; Rohr, Carol L.; Fondelier, Sharon E.
The Teacher Followup Survey (TFS) is a 1-year followup of a sample of teachers who were originally selected for the Teacher Questionnaire of the Schools and Staffing Survey (SASS) of the National Center for Education Statistics. There have been three data cycles for the SASS and three TFS versions. This data file user's manual enables the user to…
ERIC Educational Resources Information Center
Detlor, Brian; Ball, Kathryn
2015-01-01
This paper examines the merit of conducting a qualitative analysis of LibQUAL+® survey comments as a means of leveraging quantitative LibQUAL+ results, and using importance-satisfaction matrices to present and assess qualitative findings. Comments collected from the authors' institution's LibQUAL+ survey were analyzed using a codebook based on…
August 2005 Status of Forces Survey of Active-Duty Members: Administration, Datasets, and Codebook
2005-09-01
Topic 14, Military/Civilian Comparisons, covers comparisons of the military to the civilian world, including promotion opportunities, hours worked, and compensation.
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
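The covariance-adjustment idea can be illustrated with a minimal sketch: classify feature vectors by Gaussian negative log-likelihood, and scale (here, uniformly inflate) each codebook covariance to absorb noise. This is a hedged caricature, not the paper's MDI procedure; the two-class setup, the `inflate` factor, and all numeric values are invented.

```python
import numpy as np

def gauss_nll(x, mean, cov):
    """Negative log-likelihood of x under a Gaussian (up to a constant)."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d @ np.linalg.inv(cov) @ d + logdet)

def classify(pixels, means, covs, inflate=1.0):
    """Assign each feature vector to the Gaussian class with lowest NLL.
    Scaling every covariance by inflate > 1 widens the class models,
    making the assignment less sensitive to additive noise."""
    return [int(np.argmin([gauss_nll(x, m, c * inflate)
                           for m, c in zip(means, covs)]))
            for x in pixels]

# Two toy classes with unit covariance; noisy samples near each mean.
means = [np.zeros(2), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
labels = classify(np.array([[0.5, -0.2], [4.8, 5.3]]), means, covs, inflate=4.0)
```

In the paper the expansion/shrinkage is chosen per codeword from the noisy image rather than by a single global factor, but the selection mechanism (minimum distortion over adjusted Gaussians) has this shape.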
Control of the NASA Langley 16-Foot Transonic Tunnel with the Self-Organizing Feature Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
1998-01-01
A predictive, multiple model control strategy is developed based on an ensemble of local linear models of the nonlinear system dynamics for a transonic wind tunnel. The local linear models are estimated directly from the weights of a Self Organizing Feature Map (SOFM). Local linear modeling of nonlinear autonomous systems with the SOFM is extended to a control framework where the modeled system is nonautonomous, driven by an exogenous input. This extension to a control framework is based on the consideration of a finite number of subregions in the control space. Multiple self organizing feature maps collectively model the global response of the wind tunnel to a finite set of representative prototype controls. These prototype controls partition the control space and incorporate experimental knowledge gained from decades of operation. Each SOFM models the combination of the tunnel with one of the representative controls, over the entire range of operation. The SOFM based linear models are used to predict the tunnel response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal. Each SOFM provides a codebook representation of the tunnel dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the minimization of a similarity metric which is the essence of the self organizing feature of the map. Thus, the SOFM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control.
Experimental results are presented from operational runs in which the proposed method controlled the wind tunnel while meeting strict research requirements on Mach number control. Comparison with similar runs, under the same conditions, in which the tunnel was controlled by either the existing controller or an expert operator indicates the superiority of the method.
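The model-selection step described above, find the codebook entry nearest the current operating state, then use that entry's local linear model, can be sketched as follows. This is an illustrative toy, not the tunnel's actual SOFM: the 4-entry codebook, the diagonal local models, and the state values are all hypothetical.

```python
import numpy as np

# Hypothetical trained SOFM weights acting as a codebook of operating points.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# One local linear model (a 2x2 state-transition matrix) per codebook entry.
local_models = [np.eye(2) * (0.5 + 0.1 * i) for i in range(4)]

def best_matching_unit(codebook, state):
    """Index of the codebook entry nearest the state (Euclidean distance);
    this identifies the local dynamical regime."""
    return int(np.linalg.norm(codebook - state, axis=1).argmin())

def predict(state):
    """One-step prediction using the local model of the best-matching unit."""
    i = best_matching_unit(codebook, state)
    return local_models[i] @ state

state = np.array([0.9, 0.1])
bmu = best_matching_unit(codebook, state)
pred = predict(state)
```

A predictive controller would evaluate `predict` under each candidate control sequence and apply the sequence whose predicted output best meets the Mach number requirement.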
1979 Reserve Force Studies Surveys: User’s Manual and Codebooks.
1981-09-01
August 2004 Status of Forces Survey of Active-Duty Members: Administration, Datasets, and Codebook
2005-01-01
1986 Proteus Survey: Technical Manual and Codebook
1992-06-01
The sample was stratified by source of commission (including Officer Candidate School and Direct Commission) and by gender; female officers were oversampled (30% of the sample versus approximately 16% of the population). Approximately 1,000 officers were randomly selected from each commissioning year group, 1980-1984. The survey was designed to analyze the effects of the change in policy both on individual cadets and on the Academy, and to study the process of coeducation over four years.
Content-based retrieval of historical Ottoman documents stored as textual images.
Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis
2004-03-01
There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance span of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed against the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.
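The symbol-library compression scheme, build a library of distinct symbols and replace each occurrence with a pointer into it, reduces to a few lines once symbol extraction is done. A minimal sketch over hashable symbol tokens (the function names and the character example are invented; real symbols would be image patches compared by a similarity measure, not exact equality):

```python
def compress(symbols):
    """Build a library of distinct symbols and replace each occurrence
    with a pointer (index) into that library."""
    library, pointers, index = [], [], {}
    for s in symbols:
        if s not in index:
            index[s] = len(library)   # first occurrence: add to library
            library.append(s)
        pointers.append(index[s])     # every occurrence becomes a pointer
    return library, pointers

def decompress(library, pointers):
    """Reconstruct the symbol stream from the library and pointer list."""
    return [library[p] for p in pointers]

library, pointers = compress(list("abcabcabd"))
restored = decompress(library, pointers)
```

Querying works on the same representation: a query region is reduced to symbols, matched against the library, and located via the pointers, which is why no decompression is needed.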
Action recognition via cumulative histogram of multiple features
NASA Astrophysics Data System (ADS)
Yan, Xunshi; Luo, Yupin
2011-01-01
Spatial-temporal interest points (STIPs) are popular in human action recognition. However, they suffer from the difficulty of determining the codebook size, and much information is lost when forming histograms. In this paper, spatial-temporal interest regions (STIRs) are proposed; based on STIPs, they are capable of marking the locations of the most "shining" human body parts. To represent human actions, the proposed approach takes advantage of multiple features, including STIRs, pyramid histograms of oriented gradients, and pyramid histograms of oriented optical flow. To achieve this, a cumulative histogram is used to integrate dynamic information across sequences and to form feature vectors. Furthermore, the widely used nearest neighbor and AdaBoost methods are employed as classification algorithms. Experiments on the public datasets KTH, Weizmann, and UCF Sports show that the proposed approach achieves effective and robust results.
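Accumulating per-frame feature histograms over a sequence into a single fixed-length descriptor can be sketched as below. This is a generic cumulative-histogram sketch, not the paper's exact feature pipeline; the bin count, value range, and toy frames are assumptions.

```python
import numpy as np

def cumulative_histogram(frame_features, n_bins=8, lo=0.0, hi=1.0):
    """Sum per-frame histograms over a sequence, then L1-normalize,
    yielding one fixed-length feature vector per action clip."""
    total = np.zeros(n_bins)
    for f in frame_features:
        h, _ = np.histogram(f, bins=n_bins, range=(lo, hi))
        total += h
    return total / max(total.sum(), 1)

# Two toy frames of scalar feature responses in [0, 1].
frames = [np.array([0.1, 0.2]), np.array([0.8, 0.9])]
desc = cumulative_histogram(frames, n_bins=4)
```

The resulting vector is what a nearest-neighbor or AdaBoost classifier would consume, one descriptor per clip regardless of clip length.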
Portrayal of Alcohol Intoxication on YouTube
Primack, Brian A.; Colditz, Jason B.; Pang, Kevin C.; Jackson, Kristina M.
2015-01-01
Background We aimed to characterize the content of leading YouTube videos related to alcohol intoxication and to examine factors associated with alcohol intoxication in videos that were assessed positively by viewers. Methods We systematically captured the 70 most relevant and popular videos on YouTube related to alcohol intoxication. We employed an iterative process for codebook development, which resulted in 42 codes in 6 categories: video characteristics, character socio-demographics, alcohol depiction, degree of alcohol use, characteristics associated with alcohol, and consequences of alcohol. Results There were a total of 333,246,875 views for all videos combined. While 89% of videos involved males, only 49% involved females. The videos had a median of 1,646 (IQR 300-22,969) "like" designations and 33 (IQR 14-1,261) "dislike" designations each. Liquor was most frequently represented, followed by beer and then wine/champagne. Nearly one-half (44%) of videos contained a brand reference. Humor was juxtaposed with alcohol use in 79% of videos, and motor vehicle use was present in 24%. There were significantly more likes per dislike, indicating more positive sentiment, when there was representation of liquor (29.1 vs. 11.4, p = .008), brand references (32.1 vs. 19.2, p = .04), and/or physical attractiveness (67.5 vs. 17.8, p < .001). Conclusions Internet videos depicting alcohol intoxication are heavily viewed. Nearly half of these videos involve a brand-name reference. While these videos commonly juxtapose alcohol intoxication with characteristics such as humor and attractiveness, they infrequently depict negative clinical outcomes. The popularity of this site may provide an opportunity for public health intervention. PMID:25703135
Proteus Survey: Technical Manual and Codebook
1992-06-01
The sample was stratified by source of commission (40% USMA, 40% Reserve Officer Training Corps, 20% Officer Candidate School and Direct Commission) and by gender, as in previous years, for a total of approximately 7,000 surveys; female officers were supposed to be oversampled. Population percentages from the OLRDB for each of the key strata (gender and source of commission) were reportedly used for the 1987 Proteus Survey.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
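The codebook training phase can be sketched with plain Lloyd/K-means iteration: assign each training vector to its nearest codeword, then move each codeword to the centroid of its cell. This is a generic sketch under toy data, not the paper's energy-function-modified K-means; the cluster parameters and `k` are invented.

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Plain Lloyd/K-means codebook training for vector quantization."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each training vector to its nearest codeword.
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Move each codeword to the centroid of its cell (skip empty cells).
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, assign

rng = np.random.default_rng(1)
# Two well-separated clusters of 2-D training vectors standing in for
# wavelet-coefficient blocks of one sub-block size.
vectors = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
codebook, assign = train_codebook(vectors, k=2)
```

In the variable-block-size scheme, a separate codebook of this kind would be trained per sub-block type produced by the quadtree partition.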
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
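The pattern-matching kernel that such a chip accelerates is an exhaustive nearest-codeword search: compute the squared error between the input vector and every codeword, and keep the minimum. A minimal software sketch (the codebook contents and input vector are invented for illustration):

```python
import numpy as np

def codebook_search(codebook, x):
    """Exhaustive nearest-codeword search: returns the index of the
    codeword with minimum squared error to x, and that error."""
    errs = ((codebook - x) ** 2).sum(axis=1)
    i = int(errs.argmin())
    return i, float(errs[i])

cb = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
idx, err = codebook_search(cb, np.array([0.9, 1.1, 1.0]))
```

The VLSI chip performs exactly this distance accumulation and comparison in hardware, which is what makes real-time operation at speech rates feasible.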
ERIC Educational Resources Information Center
Tourangeau, Karen; Nord, Christine; Lê, Thanh; Wallner-Allen, Kathleen; Vaden-Kiernan, Nancy; Blaker, Lisa; Najarian, Michelle
2017-01-01
This manual provides guidance and documentation for users of the longitudinal kindergarten-second grade (K-2) data file of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It mainly provides information specific to the second-grade rounds of data collection. Users should refer to the "Early Childhood…
2016-05-01
Details on sampling and weighting are provided. Following the summary of the survey methodology is a description of the survey analysis and of address-file processing: at any given time, the current address used corresponded to the address number with the highest priority, and each type of address update provided by the postal service is detailed with a description of the processing steps.
ERIC Educational Resources Information Center
Tourangeau, Karen; Nord, Christine; Lê, Thanh; Wallner-Allen, Kathleen; Vaden-Kiernan, Nancy; Blaker, Lisa; Najarian, Michelle
2018-01-01
This manual provides guidance and documentation for users of the longitudinal kindergarten-fourth grade (K-4) data file of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It mainly provides information specific to the fourth-grade round of data collection. The first chapter provides an overview of the…
1999 Survey of Active Duty Personnel: Administration, Datasets, and Codebook
2000-12-01
Surveys that were returned but left key items blank were treated as nonrespondents: to be usable, a questionnaire had to have at least one item answered in each of Questions 39, 50, and 52.
2015-12-01
The literature indicates that music is a universal social phenomenon. Upon viewing the sample films and videos and listening to the music samples, a set of codes was developed and organized into a codebook.
2011-04-01
Litho codes were assigned from the survey litho code list if a survey form was sent, or independently if only a letter was sent. The variables BATCH, SERIAL, and LITHO uniquely identify each returned survey; LITHO is the lithocode scanned from the survey.
ERIC Educational Resources Information Center
Tourangeau, Karen; Nord, Christine; Lê, Thanh; Wallner-Allen, Kathleen; Hagedorn, Mary C.; Leggitt, John; Najarian, Michelle
2015-01-01
This manual provides guidance and documentation for users of the longitudinal kindergarten-first grade (K-1) data file of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It mainly provides information specific to the first-grade rounds of data collection. Data for the ECLS-K:2011 are released in both a…
April 2006 Status of Forces Survey of Active-Duty Members: Administration, Datasets, and Codebook
2006-08-01
Flag variables indicate when additional information was gathered on a particular question; for example, AD080SP is a flag variable indicating when respondents had another reason for a physical injury, and COLCREDF is a top-coding flag for COLCRED (college credits since enlistment).
Service Academy 2006 Gender Relations Survey: Administration, Datasets, and Codebook
2006-07-01
August 2006 Status of Forces Survey of Active Duty Members: Administration, Datasets, and Codebook
2006-12-01
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors, such as local binary pattern (LBP) and Gabor features, which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representations from raw pixels. Unlike existing binary face descriptors, such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD), which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches, so that discriminative information from the raw pixels of face images of different identities can be obtained with a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods, which learn a pair of transformation matrices for each modality, we exploit both the common and specific information of heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets demonstrate the effectiveness of the proposed method.
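The encoding half of learned binary descriptors typically amounts to sign-thresholding a learned linear projection of patch features. A minimal sketch (not the SLBFLE learning objective; the projection matrix `W` and the feature values are hypothetical placeholders for what the joint learning would produce):

```python
import numpy as np

def binarize(features, W):
    """Map real-valued patch features to binary codes by thresholding a
    linear projection at zero (the generic sign-based encoding step)."""
    return (features @ W > 0).astype(np.uint8)

# Hypothetical learned projection (2-D features -> 2-bit codes).
W = np.array([[1.0, -1.0], [0.0, 1.0]])
feats = np.array([[0.5, 0.2], [-0.3, 0.4]])
codes = binarize(feats, W)
```

What SLBFLE adds over this caricature is learning `W` jointly with the codebook over the binary codes, so the codes are discriminative rather than arbitrary.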
Appalachian Women’s Perceptions of Their Community’s Health Threats
Schoenberg, Nancy E.; Hatcher, Jennifer; Dignan, Mark B.
2011-01-01
Context Decades of behavioral research suggest that awareness of health threats is a necessary precursor to engage in health promotion and disease prevention, findings that can be extended to the community level. Purpose We sought to better understand local perspectives on the main health concerns of rural Appalachian communities in order to identify the key health priorities. While Kentucky Appalachian communities are often described as suffering from substandard health, resource, and socioeconomic indicators, strong traditions of community mobilization make possible positive, home-grown change. Methods To assess what women, the key health gatekeepers, perceive as the most significant health threats to their rural communities, 10 focus groups were held with 52 Appalachian women from diverse socioeconomic backgrounds. Tape-recorded narratives were content analyzed and a codebook was developed. Measures designed to increase data trustworthiness included member checks, negative case evidence, and multiple coding. Findings The following rank-ordered conditions emerged as posing the greatest threat to the health of rural Appalachian communities: (1) drug abuse/medication dependence; (2) cancer; (3) heart disease and diabetes (tied); (4) smoking; (5) poor diet/overweight; (6) lack of exercise; and (7) communicable diseases. These health threats were described as specific to the local environment, deriving from broad ecological problems and were connected to one another. Conclusion Drawing on participants’ community-relevant suggestions, we suggest ways in which rural communities may begin to confront these health concerns. These suggestions range from modest, individual-level changes to broader structural-level recommendations. PMID:18257874
NASA Astrophysics Data System (ADS)
Chan, Hau P.; Bao, Nai-Keng; Kwok, Wing O.; Wong, Wing H.
2002-04-01
The application of the Digital Pixel Hologram (DPH) as an anti-counterfeiting technology for products such as commercial goods, credit cards, identity cards, and paper banknotes is growing in importance. It offers many advantages over other anti-counterfeiting tools, including a high diffraction effect, high resolving power, resistance to photocopying on two-dimensional copiers, and the potential for mass production of patterns at very low cost. Recently, we succeeded in fabricating high-definition DPHs with resolutions above 2500 dpi for anti-counterfeiting purposes by applying modern optical diffraction theory to computer pattern generation, with the assistance of electron beam lithography (EBL). In this paper, we introduce five levels of encryption techniques that can be embedded in the design of such DPHs to further improve their anti-counterfeiting performance at negligible added cost. The techniques, in ascending order of decryption complexity, are Gray-level Encryption, Pattern Encryption, Character Encryption, Image Modification Encryption, and Codebook Encryption. A Hong Kong Special Administrative Region (HKSAR) DPH emblem was fabricated at a resolution of 2540 dpi using the facilities housed in our Optoelectronics Research Center. This emblem will be used as an illustration to discuss each encryption idea in detail during the conference.
2000-12-01
A skip flag indicating the result of checking the response on the parent (screening) item against the response(s) on the items within the skip pattern. See Table D-5, Note 2, in Appendix D.
1986-05-01
[Garbled excerpt from a survey codebook: response frequency table for a "mark all that apply" item, including the option "I never received a response in the mail from the card I sent in."]
The 1984 ARI Survey of Army Recruits. Codebook for Summer 84 Active Army Survey Respondents
1986-05-01
[Garbled excerpt from the codebook: variables T261–T265, "Do you watch any of the following programs or programming types on TV?" (NBA basketball, college basketball, NHL hockey, professional wrestling), with raw-data card and column positions.]
2013-11-04
[Garbled excerpt from a survey codebook: response frequencies for sources of awareness of the Safe Helpline (public service announcement, print advertisement, online media, posters/brochures/stickers, unit, chaplain, other).]
Exploring the physical layer frontiers of cellular uplink: The Vienna LTE-A Uplink Simulator.
Zöchmann, Erich; Schwarz, Stefan; Pratschner, Stefan; Nagel, Lukas; Lerch, Martin; Rupp, Markus
Communication systems in practice are subject to many technical and technological constraints. Multiple-input, multiple-output (MIMO) processing in current wireless communications, for example, mostly employs codebook-based precoding to save computational complexity at the transmitters and receivers. In such cases, closed-form expressions for capacity or bit-error probability are often unattainable; the effects of realistic signal processing algorithms on the performance of practical communication systems instead have to be studied in simulation environments. The Vienna LTE-A Uplink Simulator is a 3GPP LTE-A standard-compliant, MATLAB-based link-level simulator that is publicly available under an academic use license, facilitating reproducible evaluations of signal processing algorithms and transceiver designs in wireless communications. This paper reviews research results obtained by means of the Vienna LTE-A Uplink Simulator, highlights the effects of single-carrier frequency-division multiplexing (the distinguishing feature relative to the LTE-A downlink), extends known link adaptation concepts to uplink transmission, shows the implications of the uplink pilot pattern for gathering channel state information at the receiver, and closes with possible future research directions.
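The codebook-based precoding mentioned above can be illustrated with a minimal sketch: the receiver evaluates each entry of a small, fixed codebook against the current channel and feeds back only the index of the best entry. The 4-entry codebook below is purely illustrative, not the actual LTE-A codebook.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-entry codebook of unit-norm precoding vectors for a
# 2-antenna transmitter (illustrative only, not the LTE-A codebook).
codebook = [np.array([1, 1]) / np.sqrt(2),
            np.array([1, -1]) / np.sqrt(2),
            np.array([1, 1j]) / np.sqrt(2),
            np.array([1, -1j]) / np.sqrt(2)]

# Random 2x2 MIMO channel realization.
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# The receiver picks the index maximizing the effective channel gain
# ||H w||^2 and feeds back only log2(4) = 2 bits.
gains = [np.linalg.norm(H @ w) ** 2 for w in codebook]
best = int(np.argmax(gains))
```

Because the transmitter never needs full channel knowledge, this keeps both feedback overhead and transmit-side computation small, at the cost of quantizing the precoder to a finite set.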
Variability and Limits of US State Laws Regulating Workplace Wellness Programs.
Pomeranz, Jennifer L; Garcia, Andrea M; Vesprey, Randy; Davey, Adam
2016-06-01
We examined variability in state laws related to workplace wellness programs for public and private employers. We conducted legal research using LexisNexis and Westlaw to create a master list of US state laws that existed in 2014 dedicated to workplace wellness programs. The master list was then divided into laws focusing on public employers and private employers. We created 2 codebooks to describe the variables used to examine the laws. Coders used LawAtlas(SM) Workbench to code the laws related to workplace wellness programs. Thirty-two states and the District of Columbia had laws related to workplace wellness programs in 2014. Sixteen states and the District of Columbia had laws dedicated to public employers, and 16 states had laws dedicated to private employers. Nine states and the District of Columbia had laws that did not specify employer type. State laws varied greatly in their methods of encouraging or shaping wellness program requirements. Few states have comprehensive requirements or incentives to support evidence-based workplace wellness programs.
U.S. hookah tobacco smoking establishments advertised on the internet.
Primack, Brian A; Rice, Kristen R; Shensa, Ariel; Carroll, Mary V; DePenna, Erica J; Nakkash, Rima; Barnett, Tracey E
2012-02-01
Establishments dedicated to hookah tobacco smoking recently have proliferated and helped introduce hookah use to U.S. communities. The objective was to conduct a comprehensive, qualitative assessment of websites promoting these establishments. In June 2009, a systematic search process was initiated to access the universe of websites representing major hookah tobacco smoking establishments. In 2009-2010, codebook development followed an iterative paradigm involving three researchers and resulted in a final codebook consisting of 36 codes within eight categories. After two independent coders had nearly perfect agreement (Cohen's κ = 0.93) on double-coding the data in the first 20% of sites, the coders divided the remaining sites and coded them independently. A thematic approach to the synthesis of findings and selection of exemplary quotations was used. The search yielded a sample of 144 websites originating from states in all U.S. regions. Among the hookah establishments promoted on the websites, 79% served food and 41% served alcohol. Of the websites, none required age verification, <1% included a tobacco-related warning on the first page, and 4% included a warning on any page. Although mention of the word tobacco was relatively uncommon (appearing on the first page of only 26% of sites and on any page of 58% of sites), the promotion of flavorings, pleasure, relaxation, product quality, and cultural and social aspects of hookah smoking was common. Websites may play a role in enhancing or propagating misinformation related to hookah tobacco smoking. Health education and policy measures may be valuable in countering this misinformation.
A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.
He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert
2016-11-01
It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it would be possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from the handwriting style of such a document. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements, which is built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. Our proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered as a pre-structured clustering method to build a codebook, which contains more discriminative information on date and geography. The experimental results on the medieval paleographic scale data set demonstrate that our method achieves state-of-the-art results.
2005 Service Academies: Sexual Assault Survey: Administration, Datasets and Codebook
2005-10-01
[Garbled excerpt from the survey codebook: variables SB026–SB038 on the situation with the greatest effect (including semester and location of occurrence) and on retaliation by officers in the chain of command or other academy personnel.]
The 1984 ARI Survey of Army Recruits: Codebook for Summer 84 USAR and ARNG Survey Respondents
1986-05-01
[Garbled excerpt from the codebook: variables T259–T265, "Do you watch any of the following programs or programming types on TV?" (major league baseball, World Series, NBA basketball, college basketball, NHL hockey, professional wrestling, car races), with raw-data card and column positions.]
Learning Rotation-Invariant Local Binary Descriptor.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
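The LBG codebook design referred to above is essentially the generalized Lloyd iteration: alternate nearest-codeword assignment with centroid updates until the codebook stabilizes. A minimal sketch (the function name and parameters are ours, not from the report):

```python
import numpy as np

def lbg_codebook(training, k, iters=20, seed=0):
    """Design a k-entry VQ codebook with the generalized Lloyd (LBG)
    iteration: assign each training vector to its nearest codeword,
    then move each codeword to the centroid of its cell."""
    training = np.asarray(training, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize with k distinct training vectors.
    codebook = training[rng.choice(len(training), size=k, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment under Euclidean distortion.
        d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = training[labels == j].mean(axis=0)
    return codebook, labels
```

The mismatch drawback noted in the abstract is visible here: the codebook is optimized only for the training distribution, so inputs drawn from a different class of sources see higher distortion unless the codebook adapts.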
Goto, Keiko; Ominami, Chihiro; Song, Chunyan; Murayama, Nobuko; Wolff, Cindy
2014-03-01
The current study examined parental perceptions of sociocultural factors associated with healthy child feeding practices among parents of preschool-age children in rural Japan. Fifteen Japanese mothers of preschool-age children participated in this qualitative study. These participants were aged 22-39 years and resided in a rural town in western Japan. We conducted semi-structured qualitative interviews to assess parental perceptions of healthy child feeding practices and their relationships with globalization and localization. These interviews were transcribed, translated into English and coded, based on the principles of grounded theory. A codebook was developed, and both pre-identified and newly identified themes from this codebook were examined and compared. Overall, local and seasonal foods, along with traditional Japanese foods and simple foods (soshoku), were considered to be beneficial for children. Participants also noted that children were expected to be mindful and exhibit good table manners that reflect cultural values related to meal-time socializing or family bonding, and food appreciation. On the other hand, the majority of the participants stated that foods containing food additives and imported foods were unsuitable for children. Participants noted that strong social capital, especially social support from their mothers or mothers-in-law, as well as social networks for obtaining fresh local foods, contributed to healthy child feeding practices. Cultural capital (including the preservation of traditional Japanese dietary habits, eating rules and inter-generational commensality) was also identified as being key to healthy feeding practices. Identifying and promoting the social and cultural capital that positively support healthy child feeding practices may be an important component of nutrition education programs.
Ash, Tayla; Agaronov, Alen; Young, Ta'Loria; Aftosmes-Tobio, Alyssa; Davison, Kirsten K
2017-08-24
A wide range of interventions has been implemented and tested to prevent obesity in children. Given parents' influence and control over children's energy-balance behaviors, including diet, physical activity, media use, and sleep, family interventions are a key strategy in this effort. The objective of this study was to profile the field of recent family-based childhood obesity prevention interventions by employing systematic review and quantitative content analysis methods to identify gaps in the knowledge base. Using a comprehensive search strategy, we searched the PubMed, PsycINFO, and CINAHL databases to identify eligible interventions aimed at preventing childhood obesity with an active family component published between 2008 and 2015. Characteristics of study design, behavioral domains targeted, and sample demographics were extracted from eligible articles using a comprehensive codebook. More than 90% of the 119 eligible interventions were based in the United States, Europe, or Australia. Most interventions targeted children 2-5 years of age (43%) or 6-10 years of age (35%), with few studies targeting the prenatal period (8%) or children 14-17 years of age (7%). The home (28%), primary health care (27%), and community (33%) were the most common intervention settings. Diet (90%) and physical activity (82%) were more frequently targeted in interventions than media use (55%) and sleep (20%). Only 16% of interventions targeted all four behavioral domains. Studies in developing countries were underrepresented, as were racial minorities and non-traditional families, whereas Hispanic/Latino families and families of low socioeconomic status were highly represented. The limited number of interventions targeting diverse populations and obesity risk behaviors beyond diet and physical activity inhibits the development of comprehensive, tailored interventions.
To ensure a broad evidence base, more interventions implemented in developing countries and targeting racial minorities, children at both ends of the age spectrum, and media and sleep behaviors would be beneficial. This study can help inform future decision-making around the design and funding of family-based interventions to prevent childhood obesity.
Multi-sensor physical activity recognition in free-living.
Ellis, Katherine; Godbole, Suneeta; Kerr, Jacqueline; Lanckriet, Gert
Physical activity monitoring in free-living populations has many applications for public health research, weight-loss interventions, context-aware recommendation systems and assistive technologies. We present a system for physical activity recognition that is learned from a free-living dataset of 40 women who wore multiple sensors for seven days. The multi-level classification system first learns low-level codebook representations for each sensor and uses a random forest classifier to produce minute-level probabilities for each activity class. Then a higher-level HMM layer learns patterns of transitions and durations of activities over time to smooth the minute-level predictions.
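The HMM smoothing layer described above can be sketched as a Viterbi decode over the per-minute class probabilities, with a transition matrix that encodes the tendency of activities to persist from minute to minute. The matrices and function below are illustrative, not learned from the study's data.

```python
import numpy as np

def viterbi_smooth(frame_probs, trans, prior):
    """Smooth per-minute class probabilities with an HMM Viterbi decode.
    frame_probs: (T, K) per-minute class probabilities (e.g., from a
    random forest); trans: (K, K) transition matrix; prior: (K,) initial
    state probabilities. Returns the most likely state sequence."""
    T, K = frame_probs.shape
    logp = np.log(frame_probs + 1e-12)
    logt = np.log(trans + 1e-12)
    delta = np.log(prior + 1e-12) + logp[0]   # best log-score per state
    back = np.zeros((T, K), dtype=int)        # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logt        # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logp[t]
    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

With a "sticky" transition matrix, a single-minute misclassification surrounded by confident predictions of another activity is overridden, which is exactly the smoothing effect the abstract describes.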
User-centered Design of the eyeGuide: A Tailored Glaucoma Behavior Change Program.
Killeen, Olivia J; MacKenzie, Chamisa; Heisler, Michele; Resnicow, Ken; Lee, Paul P; Newman-Casey, Paula Anne
2016-10-01
We employed user-centered design to refine a prototype of the eyeGuide, a novel, tailored behavior change program intended to improve medication adherence among glaucoma patients. Glaucoma patients aged 40 years and above who were prescribed ≥1 glaucoma medication were included. The eyeGuide consists of tailored educational content and tailored testimonials in which patients share how they were able to overcome barriers to improve their medication adherence. A hybrid of semistructured diagnostic and pretesting interviews was used to refine the content of the eyeGuide. Purposeful sampling was used to recruit a study population representative of the glaucoma patient population. Interviews were conducted until thematic saturation was reached. Interviews were audiorecorded and transcribed verbatim. Three researchers analyzed the transcripts, generated a codebook, and identified key themes using NVivo 10.0 to further refine the eyeGuide. Twenty-one glaucoma patients were interviewed; mean age 72±12.4 years, 5 (24%) African Americans, 9 (43%) with poor self-reported adherence, 10 (47.6%) aged 75 years and above, 10 (47.6%) with poor vision, and 9 (42.9%) women. Qualitative analysis identified 5 important themes for improving glaucoma self-management: social support, patient-provider relationship, medication routine, patients' beliefs about disease and treatment, and eye drop instillation. All participants expressed satisfaction with in-person delivery of the eyeGuide and preferred this to a Web-based module. Participant feedback resulted in revised content. User-centered design generated improvements in the eyeGuide that would not have been possible without patient input. Participants expressed satisfaction with the tailored content.
Squitieri, Lee; Larson, Bradley P.; Chang, Kate W-C; Yang, Lynda J-S.; Chung, Kevin C.
2016-01-01
Background Elective surgical management of neonatal brachial plexus palsy (NBPP) is complex, variable, and often individualized. Little is known about the medical decision-making process among adolescents with NBPP and their families faced with making complex treatment decisions. The experiences of these patients and their parents were analyzed to identify key factors in the decision-making process. Patients and Methods Eighteen adolescents with residual NBPP deficits between the ages of 10 and 17 years, along with their parents, were included in the present study. A qualitative research design was employed involving separate one-hour, in-person, semi-structured interviews, which were audio recorded and transcribed. Grounded theory was applied by two independent members of the research team to identify recurrent themes and ultimately create a codebook that was then applied to the data. Results Medical decision-making among adolescents with NBPP and their families is multifaceted and individualized, comprising both patient- and system-dependent factors. Four codes pertaining to the medical decision-making process were identified: 1) knowledge acquisition, 2) multidisciplinary care, 3) adolescent autonomy, and 4) patient expectations and treatment desires. Overall, parental decision-making was heavily influenced by system-dependent factors, while adolescents largely based their medical decision-making on individual treatment desires to improve function and/or aesthetics. Conclusions There are many areas for improving the delivery of information and health care organization among adolescents with NBPP and their families. We recommend the development of educational interdisciplinary programs and decision aids containing evidence-based management guidelines targeted toward primary care providers and patients. We believe that a computer-based learning module may provide the best avenue to achieve maximum penetrance and convenience of information sharing. PMID:23714810
1988-07-01
[Garbled excerpt from a survey codebook: coded reasons for enlistment (self-development, self-confidence, maturity, discipline, pride, advantage over college, money/benefits, education benefits, travel, adventure) with frequencies and percentages.]
1986-05-01
[Garbled excerpt from a survey codebook: response frequency table for a "mark all that apply" item on advertising sources (other parts of a newspaper; booklet about the Army College Fund).]
The 1986 ARI Survey of U.S. Army Recruits: Codebook for Active Army Survey Respondents
1987-04-01
[Garbled excerpt from the codebook: variables Y065–Y070 on regular TV viewing (NCAA basketball playoffs, NBA regular season games, NBA playoffs, Newhart, Who's the Boss, The Cosby Show), with column positions.]
Horner, Pilar; Sanchez, Ninive; Castillo, Marcela; Delva, Jorge
2012-06-01
To obtain rich information about how adult Latinos living in high-poverty/high-drug use neighborhoods perceive and negotiate their environment. In 2008, 13 adult caregivers in Santiago, Chile, were interviewed with open-ended questions to ascertain beliefs about neighborhood effects and drug use. Inductive analysis was used to develop the codebook/identify trends. Residents externalized their understanding of drug use and misuse by invoking the concept of delinquent youth. A typology of their perceptions is offered. Learning more about residents' circumstances may help focus on needs-based interventions. More research with Latino neighborhoods is needed for culturally competent models of interventions.
Aggarwal, Neil K; Desilva, Ravi; Nicasio, Andel V; Boiler, Marit; Lewis-Fernández, Roberto
2015-01-01
Cross-cultural mental health researchers often analyze patient explanatory models of illness to optimize service provision. The Cultural Formulation Interview (CFI) is a cross-cultural assessment tool released in May 2013 with DSM-5 to revise shortcomings of the DSM-IV Outline for Cultural Formulation (OCF). The CFI field trial took place in 6 countries, 14 sites, and with 321 patients to explore its feasibility, acceptability, and clinical utility with patients and clinicians. We sought to analyze whether and how CFI feasibility, acceptability, and clinical utility were related to patient-clinician communication. We report data from the New York site, which enrolled 7 clinicians and 32 patients in 32 patient-clinician dyads. We undertook a data analysis independent of the parent field trial by conducting content analyses of debriefing interviews with all participants (n = 64) based on codebooks derived from frameworks for medical communication and implementation outcomes. Three coders created codebooks, coded independently, established inter-rater coding reliability, and analyzed whether the CFI affects medical communication with respect to feasibility, acceptability, and clinical utility. Despite racial, ethnic, cultural, and professional differences within our group of patients and clinicians, we found that promoting satisfaction through the interview, eliciting data, eliciting the patient's perspective, and perceiving data at multiple levels were common codes that explained how the CFI affected medical communication. We also found that all but two codes fell under the implementation outcome of clinical utility, two fell under acceptability, and none fell under feasibility. Our study offers new directions for research on how a cultural interview affects patient-clinician communication.
Future research can analyze how the CFI and other cultural interviews impact medical communication in clinical settings with subsequent effects on outcomes such as medication adherence, appointment retention, and health condition.
Cherrington, Andrea L.; Andreae, Lynn; Prince, Candice; Holt, Cheryl L.; Halanych, Jewell H.
2015-01-01
Objectives. We qualitatively assessed patients’ perceptions of discrimination and patient satisfaction in the health care setting specific to interactions with nonphysician health care staff. Methods. We conducted 12 focus-group interviews with African American and European American participants, stratified by race and gender, from June to November 2008. We used a topic guide to facilitate discussion and identify factors contributing to perceived discrimination and analyzed transcripts for relevant themes using a codebook. Results. We enrolled 92 participants: 55 African Americans and 37 European Americans, all of whom reported perceived discrimination and lower patient satisfaction as a result of interactions with nonphysician health care staff. Perceived discrimination was associated with 2 main characteristics: insurance or socioeconomic status and race. Both verbal and nonverbal communication style on the part of nonphysician health care staff were related to individuals’ perceptions of how they were treated. Conclusions. The behaviors of nonphysician health care staff in the clinical setting can potentially contribute to patients’ perceptions of discrimination and lowered patient satisfaction. Future interventions to reduce health care discrimination should include a focus on staff cultural competence and customer service skills. PMID:26270291
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed previously that not only optimizes the linear model coefficients for the open-loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
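The open-loop analysis underlying this work fits predictor coefficients to the speech segment itself, before any quantization or closed-loop reconstruction is considered. A minimal least-squares sketch of that step (this is classical LPC analysis, not the paper's nonlinear joint optimization; the function name is ours):

```python
import numpy as np

def lpc_open_loop(x, order):
    """Fit order-p linear predictor coefficients by least squares on an
    open-loop basis: predict x[n] from the previous p samples."""
    x = np.asarray(x, dtype=float)
    # Each row holds [x[n-1], x[n-2], ..., x[n-order]].
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    A = np.array(rows)
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b - A @ a          # open-loop prediction error
    return a, residual
```

In a coder, this residual is what gets quantized and transmitted; the paper's point is that optimizing the coefficients jointly with that quantization, rather than on the clean open-loop error alone, is what the nonlinear algorithm adds.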
Graduate medical education competencies for international health electives: A qualitative study.
Nordhues, Hannah C; Bashir, M Usmaan; Merry, Stephen P; Sawatsky, Adam P
2017-11-01
Residency programs offer international health electives (IHEs), providing multiple educational benefits. This study aimed to identify how IHEs fulfill the Accreditation Council for Graduate Medical Education (ACGME) core competencies. We conducted a thematic analysis of post-rotation reflective reports from residents who participated in IHEs through the Mayo International Health Program. We coded reports using a codebook created from the ACGME competencies. Using a constant comparative method, we identified significant themes within each competency. Residents from 40 specialties participated in 377 IHEs in 56 countries from 2001 to 2014. Multiple themes were identified within each of the six ACGME core competencies: Patient Care and Procedural Skills (4), Medical Knowledge (5), Practice-Based Learning and Improvement (3), Interpersonal and Communication Skills (5), Professionalism (4), and Systems-Based Practice (3). Themes included improving physical exam and procedural skills, providing care in resource-limited settings, gaining knowledge of tropical and non-tropical diseases, identifying socioeconomic determinants of health, engaging in the education of others, and increasing communication across cultures and multidisciplinary teams. Through IHEs, residents advanced their knowledge, skills, and attitudes in each of the six ACGME competencies. These data can be used for development of IHE competencies and milestones for resident assessment.
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful for capturing discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not yet been explored within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
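A minimal sketch of the histogram construction: assuming local descriptors have already been quantized into codebook indices ("visual words") in raster-scan order, an image is represented by counts of 1-grams plus consecutive 2-grams. The paper builds a dedicated codebook of n-grams; enumerating all word pairs as below is a simplification:

```python
import numpy as np

def visual_ngram_histogram(words, vocab_size):
    """Histogram of visual 1-grams plus consecutive 2-grams over a
    sequence of visual words (codebook indices). Adjacency is taken
    along a raster-scan ordering, an assumption of this sketch."""
    hist1 = np.bincount(words, minlength=vocab_size)
    # Encode each consecutive pair (w[i], w[i+1]) as a single bin index.
    pairs = words[:-1] * vocab_size + words[1:]
    hist2 = np.bincount(pairs, minlength=vocab_size ** 2)
    return np.concatenate([hist1, hist2]).astype(float)

words = np.array([0, 1, 1, 2, 0, 1])   # toy quantized image
h = visual_ngram_histogram(words, vocab_size=3)
```

The resulting fixed-length vector (here 3 + 9 bins) can be fed to any standard classifier, exactly as a plain BoVW histogram would be.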
The 1985 Survey of Army Recruits: Codebook for Summer 85 Active Army Survey Respondents. Volume 1
1986-05-01
[OCR fragment of codebook cross-tabulation tables (N = 7220): checked survey items on recruiting exposure, e.g., "an Army post tour accompanied by a U.S. Army recruiter" and "a U.S. Army sponsored or presented program at school where soldiers describe their Army experiences and duties," tabulated by service and by accession year (1982-1984).]
2002-05-01
[OCR fragment of a survey mailing schedule table: Wave 2 and Wave 3 domestic and foreign DoD and Coast Guard remailings, February-March 2002, with mailing dates and counts.]
Outcomes That Matter to Teens With Type 1 Diabetes.
Ye, Clara Y; Jeppson, Thor C; Kleinmaus, Ellen M; Kliems, Harald M; Schopp, Jennifer M; Cox, Elizabeth D
2017-06-01
Purpose The purpose of the study was to describe outcomes that matter to teens with type 1 diabetes. Understanding outcomes that matter to teens could support successful interventions to improve diabetes self-management. Methods Fifty publicly available posts published in the "teen" sections of 2 major diabetes online forums between 2011 and 2013 were analyzed using qualitative research methods. From each post, content and descriptive data (eg, duration of diabetes and age) were collected. Two members of the research team independently used open coding techniques to identify outcomes (defined as impacts or consequences of type 1 diabetes) and organized them into themes and subthemes. A codebook was jointly developed to facilitate the identification of meaningful outcomes from the posts. Results Teens' average age was 15.7 years, and the average time since diabetes diagnosis was 6.3 years. The 3 most commonly mentioned outcomes were (1) interactions with peers ("I want to talk to someone who understands"), (2) emotional well-being ("Diabetes makes me want to cry"), and (3) blood glucose management ("My blood sugar never goes down"). Other identified outcomes included (4) physical well-being, (5) education and motivation of others, (6) family interactions, (7) academic achievement, and (8) interactions with important others such as teachers. Conclusions While teens are concerned about control of their blood glucose, there are many other outcomes that matter to them. Health care providers and diabetes educators may want to consider these other outcomes when motivating teens with type 1 diabetes to improve blood glucose control.
Toward Real-Time Infoveillance of Twitter Health Messages.
Colditz, Jason B; Chu, Kar-Hai; Emery, Sherry L; Larkin, Chandler R; James, A Everette; Welling, Joel; Primack, Brian A
2018-06-21
There is growing interest in conducting public health research using data from social media. In particular, Twitter "infoveillance" has demonstrated utility across health contexts. However, rigorous and reproducible methodologies for using Twitter data in public health are not yet well articulated, particularly those related to content analysis, which is a highly popular approach. In 2014, we gathered an interdisciplinary team of health science researchers, computer scientists, and methodologists to begin implementing an open-source framework for real-time infoveillance of Twitter health messages (RITHM). Through this process, we documented common challenges and novel solutions to inform future work in real-time Twitter data collection and subsequent human coding. The RITHM framework allows researchers and practitioners to use well-planned and reproducible processes in retrieving, storing, filtering, subsampling, and formatting data for health topics of interest. Further considerations for human coding of Twitter data include coder selection and training, data representation, codebook development and refinement, and monitoring coding accuracy and productivity. We illustrate methodological considerations through practical examples from formative work related to hookah tobacco smoking, and we reference essential methods literature related to understanding and using Twitter data. (Am J Public Health. Published online ahead of print June 21, 2018: e1-e6. doi:10.2105/AJPH.2018.304497).
Free-Form Region Description with Second-Order Pooling.
Carreira, João; Caseiro, Rui; Batista, Jorge; Sminchisescu, Cristian
2015-06-01
Semantic segmentation and object detection are nowadays dominated by methods operating on regions obtained as a result of a bottom-up grouping process (segmentation) but use feature extractors developed for recognition on fixed-form (e.g. rectangular) patches, with full images as a special case. This is most likely suboptimal. In this paper we focus on feature extraction and description over free-form regions and study the relationship with their fixed-form counterparts. Our main contributions are novel pooling techniques that capture the second-order statistics of local descriptors inside such free-form regions. We introduce second-order generalizations of average and max-pooling that, together with appropriate non-linearities derived from the mathematical structure of their embedding space, lead to state-of-the-art recognition performance in semantic segmentation experiments without any type of local feature coding. In contrast, we show that codebook-based local feature coding is more important when feature extraction is constrained to operate over regions that include both foreground and large portions of the background, as is typical in image classification settings. For high-accuracy localization setups, however, second-order pooling over free-form regions produces results superior to those of the winning systems in the contemporary semantic segmentation challenges, with models that are much faster in both training and testing.
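Second-order average pooling can be sketched as follows: the descriptors of a region are pooled by averaging their outer products, and the matrix logarithm (computed here via an eigendecomposition, with a small ridge term added as a numerical assumption) serves as the non-linearity suggested by the log-Euclidean geometry of symmetric positive-definite matrices:

```python
import numpy as np

def second_order_avg_pool(D, eps=1e-6):
    """Second-order average pooling sketch for one free-form region.
    D: (n_descriptors, d) array of local descriptors."""
    # Average of outer products = second-moment matrix of the region.
    G = (D.T @ D) / len(D) + eps * np.eye(D.shape[1])  # ridge keeps G SPD
    w, V = np.linalg.eigh(G)                           # eigendecomposition
    logG = V @ np.diag(np.log(w)) @ V.T                # matrix logarithm
    return logG[np.triu_indices_from(logG)]            # vectorize upper triangle

rng = np.random.default_rng(1)
D = rng.standard_normal((50, 4))   # toy descriptors for one region
f = second_order_avg_pool(D)       # fixed-length region feature
```

Because the output length depends only on the descriptor dimension, regions of any shape and size map to a fixed-length vector, which is what lets the method skip codebook-based coding entirely.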
Deep Learning for Low-Textured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most of the common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using nearest-neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found at the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
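The matching stage, nearest-neighbor search over descriptor codes, can be sketched generically as below; the ratio test is a standard heuristic used here for illustration, not the paper's modified voting algorithm:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test: accept a
    match only when the best candidate is clearly closer than the
    second-best (a generic sketch of the matching stage)."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)  # distances to all codes in b
        j1, j2 = np.argsort(dist)[:2]              # two nearest neighbors
        if dist[j1] < ratio * dist[j2]:            # keep unambiguous matches only
            matches.append((i, j1))
    return matches

desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.9, 0.1], [0.0, 1.0], [5.0, 5.0]])
matches = match_descriptors(desc_a, desc_b)
```

A voting scheme like the paper's would then aggregate such pairwise matches across all image pairs before triangulation.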
Panchapagesan, Sankaran; Alwan, Abeer
2011-01-01
In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670
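The cost function described (a formant distance plus parameter regularization and continuity terms) can be sketched as below; the relative-distance weighting, the penalty weights, and the `synth_formants` stand-in for the chain-matrix acoustic model are all illustrative assumptions:

```python
import numpy as np

def inversion_cost(P, F_nat, synth_formants, lam_reg=0.1, lam_cont=0.1):
    """Analysis-by-synthesis inversion cost sketch.
    P: (n_frames, n_params) articulatory trajectory.
    F_nat: (n_frames, 3) natural first three formants.
    synth_formants: maps one parameter frame to its three formants
    (hypothetical stand-in for the chain-matrix VT acoustics)."""
    F_syn = np.array([synth_formants(p) for p in P])
    formant_err = np.sum(((F_syn - F_nat) / F_nat) ** 2)  # relative formant distance
    reg = lam_reg * np.sum(P ** 2)                        # keep parameters near neutral
    cont = lam_cont * np.sum(np.diff(P, axis=0) ** 2)     # smooth trajectories
    return formant_err + reg + cont

P = np.zeros((5, 3))                      # 5 frames of 3 articulatory parameters
F_nat = np.full((5, 3), 500.0)            # toy target formants (Hz)
synth = lambda p: 500.0 + 100.0 * p       # toy linear acoustic map (assumed)
cost = inversion_cost(P, F_nat, synth)
```

A quasi-Newton optimizer would minimize this cost over P, initialized from the best-matching codebook entry as the abstract describes.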
Conducting a multicentre and multinational qualitative study on patient transitions.
Johnson, Julie K; Barach, Paul; Vernooij-Dassen, Myrra
2012-12-01
A multicentre, multinational research study requires careful planning and coordination to accomplish the aims of the study and to ensure systematic and rigorous examination of all project methods and data collected. The aim of this paper is to describe the approach we used during the HANDOVER Project to develop a multicentre, multinational research project for studying transitions of patient care while creating a community of practice for the researchers. We highlight the process used to assure the quality of a multicentre qualitative study and to create a codebook for data analysis as examples of attending to the community of practice while conducting rigorous qualitative research. Essential elements for the success of this multinational, multilanguage research project included recruiting a strong research team, explicit planning for decision-making processes to be used throughout the project, acknowledging the differences among the study settings and planning the protocols to capitalise upon those differences. Although not commonly discussed in reports of large research projects, there is an underlying, concurrent stream of activities to develop a cohesive team that trusts and respects one another's skills and that engages independent researchers in a group process that contributes to achieving study goals. We discuss other lessons learned and offer recommendations for other teams planning multicentre research.
Studying de-implementation in health: an analysis of funded research grants.
Norton, Wynne E; Kennedy, Amy E; Chambers, David A
2017-12-04
Studying de-implementation (defined herein as reducing or stopping the use of a health service or practice provided to patients by healthcare practitioners and systems) has gained traction in recent years. De-implementing ineffective, unproven, harmful, overused, inappropriate, and/or low-value health services and practices is important for mitigating patient harm, improving processes of care, and reducing healthcare costs. A better understanding of the state-of-the-science is needed to guide future objectives and funding initiatives. To this end, we characterized de-implementation research grants funded by the United States (US) National Institutes of Health (NIH) and the Agency for Healthcare Research and Quality (AHRQ). We used systematic methods to search, identify, and describe de-implementation research grants funded across all 27 NIH Institutes and Centers (ICs) and AHRQ from fiscal year 2000 through 2017. Eleven key terms and three funding opportunity announcements were used to search for research grants in the NIH Query, View and Report (QVR) system. Two coders identified eligible grants based on inclusion/exclusion criteria. A codebook was developed, pilot tested, and revised before coding the full grant applications of the final sample. A total of 1277 grants were identified through the QVR system; 542 remained after removing duplicates. After the multistep eligibility assessment and review process, 20 grant applications were coded. Many grants were funded by NIH (n = 15), with fewer funded by AHRQ, and a majority were funded between fiscal years 2015 and 2016 (n = 11). Grant proposals focused on de-implementing a range of health services and practices (e.g., medications, therapies, screening tests) across various health areas (e.g., cancer, cardiovascular disease) and delivery settings (e.g., hospitals, nursing homes, schools).
The grants proposed a variety of study designs and research methods (e.g., experimental, observational, mixed methods) to accomplish study aims. Based on the systematic portfolio analysis of NIH- and AHRQ-funded research grants over the past 17 years, relatively few have focused on studying the de-implementation of ineffective, unproven, harmful, overused, inappropriate, and/or low-value health services and practices provided to patients by healthcare practitioners and systems. Strategies for raising the profile and growing the field of research on de-implementation are discussed.
Thrasher, Ashley B.; Walker, Stacy E.; Hankemeier, Dorice A.; Pitney, William A.
2015-01-01
Context: Many newly credentialed athletic trainers gain initial employment as graduate assistants (GAs) in the collegiate setting, yet their socialization into their role is unknown. Exploring the socialization process of GAs in the collegiate setting could provide insight into how that process occurs. Objective: To explore the professional socialization of GAs in the collegiate setting to determine how GAs are socialized and developed as athletic trainers. Design: Qualitative study. Setting: Individual phone interviews. Patients or Other Participants: Athletic trainers (N = 21) who had supervised GAs in the collegiate setting for a minimum of 8 years (16 men [76%], 5 women [24%]; years of supervision experience = 14.6 ± 6.6). Data Collection and Analysis: Data were collected via phone interviews, which were recorded and transcribed verbatim. Data were analyzed by a 4-person consensus team with a consensual qualitative-research design. The team independently coded the data and compared ideas until a consensus was reached, and a codebook was created. Trustworthiness was established through member checks and multianalyst triangulation. Results: Four themes emerged: (1) role orientation, (2) professional development and support, (3) role expectations, and (4) success. Role orientation occurred both formally (eg, review of policies and procedures) and informally (eg, immediate role immersion). Professional development and support consisted of the supervisor mentoring and intervening when appropriate. Role expectations included decision-making ability, independent practice, and professionalism; however, supervisors often expected GAs to function as experienced, full-time staff. Success of the GAs depended on their adaptability and on the proper selection of GAs by supervisors. Conclusions: Supervisors socialize GAs into the collegiate setting by providing orientation, professional development, mentoring, and intervention when necessary. 
Supervisors are encouraged to use these socialization tactics to enhance the professional development of GAs in the collegiate setting. PMID:25347237
Portrayal of alcohol intoxication on YouTube.
Primack, Brian A; Colditz, Jason B; Pang, Kevin C; Jackson, Kristina M
2015-03-01
We aimed to characterize the content of leading YouTube videos related to alcohol intoxication and to examine factors associated with alcohol intoxication in videos that were assessed positively by viewers. We systematically captured the 70 most relevant and popular videos on YouTube related to alcohol intoxication. We employed an iterative process of codebook development, which resulted in 42 codes in 6 categories: video characteristics, character sociodemographics, alcohol depiction, degree of alcohol use, characteristics associated with alcohol, and consequences of alcohol. There were a total of 333,246,875 views for all videos combined. While 89% of videos involved males, only 49% involved females. The videos had a median of 1,646 (interquartile range [IQR] 300 to 22,969) "like" designations and 33 (IQR 14 to 1,261) "dislike" designations each. Liquor was most frequently represented, followed by beer and then wine/champagne. Nearly one-half (44%) of videos contained a brand reference. Humor was juxtaposed with alcohol use in 79% of videos, and motor vehicle use was present in 24%. There were significantly more likes per dislike, indicating more positive sentiment, when there was representation of liquor (29.1 vs. 11.4, p = 0.008), brand references (32.1 vs. 19.2, p = 0.04), and/or physical attractiveness (67.5 vs. 17.8, p < 0.001). Internet videos depicting alcohol intoxication are heavily viewed. Nearly half of these videos involve a brand-name reference. While these videos commonly juxtapose alcohol intoxication with characteristics such as humor and attractiveness, they infrequently depict negative clinical outcomes. The popularity of this site may provide an opportunity for public health intervention. Copyright © 2015 by the Research Society on Alcoholism.
Niederdeppe, Jeff; Avery, Rosemary J; Miller, Emily Elizabeth Namaste
2018-05-01
The study identifies the extent to which theoretical constructs drawn from well-established message effect communication theories are reflected in the content of alcohol-related public service announcements (PSAs) airing in the United States over a 16-year period. Content analysis of 18 530 141 alcohol-abuse (AA) and drunk-driving (DD) PSAs appearing on national network and local cable television stations in the 210 largest designated marketing areas (DMAs) from January 1995 through December 2010. The authors developed a detailed content analytic codebook and trained undergraduate coders to reliably identify the extent to which theoretical constructs and other creative ad elements are reflected in the PSAs. We show these patterns using basic descriptive statistics. Although both classes of alcohol-related PSAs used strategies that are consistent with major message effect theories, their specific theoretical orientations differed dramatically. The AA PSAs were generally consistent with constructs emphasized by the Extended Parallel Process Model (EPPM), whereas DD PSAs were more likely to use normative strategies emphasized by the Focus Theory of Normative Conduct (FTNC) or source credibility appeals central to the Elaboration Likelihood Model. Having identified message content, future research should use deductive approaches to determine if volume and message content of alcohol-control PSAs have an impact on measures of alcohol consumption and/or measures of drunk driving, such as fatalities or driving while intoxicated/driving under the influence arrests.
Lessons from Early Medicaid Expansions Under Health Reform: Interviews with Medicaid Officials
Sommers, Benjamin D; Arntson, Emily; Kenney, Genevieve M; Epstein, Arnold M
2013-01-01
Background The Affordable Care Act (ACA) dramatically expands Medicaid in 2014 in participating states. Meanwhile, six states have already expanded Medicaid since 2010 to some or all of the low-income adults targeted under health reform. We undertook an in-depth exploration of these six “early-expander” states—California, Connecticut, the District of Columbia, Minnesota, New Jersey, and Washington—through interviews with high-ranking Medicaid officials. Methods We conducted semi-structured interviews with 11 high-ranking Medicaid officials in six states and analyzed the interviews using qualitative methods. Interviews explored enrollment outreach, stakeholder involvement, impact on beneficiaries, utilization and costs, implementation challenges, and potential lessons for 2014. Two investigators independently analyzed interview transcripts and iteratively refined the codebook until reaching consensus. Results We identified several themes. First, these expansions built upon pre-existing state-funded insurance programs for the poor. Second, predictions about costs and enrollment were challenging, indicating the uncertainty in projections for 2014. Other themes included greater than anticipated need for behavioral health services in the expansion population, administrative challenges of expansions, and persistent barriers to enrollment and access after expanding eligibility—though officials overall felt the expansions increased access for beneficiaries. Finally, political context—support or opposition from stakeholders and voters—plays a critical role in shaping the success of Medicaid expansions. Conclusions Early Medicaid expansions under the ACA offer important lessons to federal and state policymakers as the 2014 expansions approach. While the context of each state’s expansion is unique, key shared experiences were significant implementation challenges and opportunities for expanding access to needed services. PMID:24834369
Bao, Yuhua; Eggman, Ashley; Richardson, Joshua; Bruce, Martha
2013-01-01
Objective Depression affects one in four older adults receiving home health care. Medicare policies are influential in shaping home health practice. This study aims to identify Medicare policy areas that are aligned or misaligned with depression care quality improvement in home health care. Methods Qualitative study based on semi-structured interviews with nurses and administrators from five home health agencies in five states (n=20). Digitally recorded interviews were transcribed and analyzed using the grounded theory method. A multi-disciplinary team iteratively developed a codebook from interview data to identify themes. Results Several important Medicare policies are largely misaligned with depression care quality improvement in home health care: Medicare eligibility requirements for patients to remain homebound and to demonstrate a need for skilled care restrict nurses’ abilities to follow up with depressed patients for sufficient length of time; the lack of explicit recognition of nursing time and quality of care in the home health Prospective Payment System (PPS) provides misaligned incentives for depression care; incorporation of a two-item depression screening tool in Medicare-mandated comprehensive patient assessment raised clinician awareness of depression; however, inclusion of the tool at Start-of-Care only but not any other follow-up points limits its potential in assisting nurses with depression care management; under-development of clinical decision support for depression care in vendor-developed electronic health records constitutes an important barrier to depression quality improvement in home health care. Conclusions Several influential Medicare policies and regulations for home health practice may be misaligned with evidence-based depression care for home health patients. PMID:24632686
Reigniting Tobacco Ritual: Waterpipe Tobacco Smoking Establishment Culture in the United States
Carroll, Mary V.; Chang, Judy; Sidani, Jaime E.; Barnett, Tracey E.; Soule, Eric; Balbach, Edith
2014-01-01
Introduction: Waterpipe tobacco smoking (WTS) is an increasingly prevalent form of tobacco use in the United States. Its appeal may stem from its social, ritualistic, and aesthetic nature. Our aim in this study was to understand WTS as a social ritual with the goal of informing prevention efforts. Methods: We conducted a covert observational study consisting of 38 observation sessions in 11 WTS establishments in 3 U.S. cities. Data collection was based on an established conceptual framework describing ritualistic elements of tobacco use. Iterative codebook development and qualitative thematic synthesis were used to analyze data. Results: Atmospheres ranged from quiet coffee shop to boisterous bar party environments. While some children and older adults were present, the majority of clientele were young adults. Men and women were evenly represented. However, there were 19 occurrences of a man smoking by himself, whereas no woman smoked alone. The vast majority (94%) of the clientele were actively smoking waterpipes. All 83 observed groups manifested at least 1 of the ritual elements of our conceptual framework, while 41 of the 83 observed groups (49%) demonstrated all 4 ritual elements. Conclusions: Despite its heterogeneity, WTS is often characterized by 1 or more established elements of a tobacco-related social ritual. It may be valuable for clinical and public health interventions to acknowledge and address the ritualistic elements and social function of WTS. PMID:24972889
Shahwan, Shazana; Fauziana, Restria; Satghare, Pratika; Vaingankar, Janhavi; Picco, Louisa; Chong, Siow Ann; Subramaniam, Mythily
2016-01-01
Background Youths are more likely to rebel against messages perceived to inhibit their independence. In order for antismoking campaigns to be effective with this population, adopting evidence-based strategies is crucial. In this study, we examined youths’ reaction to past and ongoing antismoking campaigns, and delineate effective and ineffective components of campaigns as identified by them. Methods 12 focus group discussions were conducted with 91 youth smokers aged 15–29 years. Data were analysed using qualitative content analysis. A codebook was derived through an iterative process. The data were coded systematically by three coders, using Nvivo V.10. Results Fear appeals that had no immediate relevance to youths, and campaigns involving humour or sports/dance activities that distracted youths from the antismoking messages, were deemed ineffective. In contrast, elements identified to be efficacious were: positive tone, low-fear visual images, ‘low-controlling language’ and a genuine spokesperson. Youth tended to favour campaigns circulating on social media platforms. Importantly, youths voiced a lack of tangible support for their efforts to quit smoking. Conclusions Participants expressed a preference towards antismoking messages that were less authoritative, and perceived a distinct lack of support for their intentions to quit smoking. There is room for incorporating suggestions by participants in future antismoking campaigns. Future research is needed to identify barriers to accessing available support. PMID:26944686
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
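A generic one-pass adaptive VQ in the spirit of LAVQ (a sketch, not the exact published algorithm): each input vector reuses and nudges its nearest codeword when the distortion is within a threshold, and otherwise grows the codebook, so no a priori source statistics are required:

```python
import numpy as np

def lavq_encode(vectors, threshold=0.5, alpha=0.1):
    """One-pass locally adaptive VQ sketch: the codebook is built and
    adapted on the fly as the data stream is encoded."""
    codebook, indices = [], []
    for v in vectors:
        if codebook:
            d = [np.linalg.norm(v - c) for c in codebook]
            j = int(np.argmin(d))
            if d[j] < threshold:
                indices.append(j)                               # reuse codeword j
                codebook[j] = codebook[j] + alpha * (v - codebook[j])  # local adaptation
                continue
        codebook.append(v.astype(float))    # codebook miss: input becomes a codeword
        indices.append(len(codebook) - 1)
    return indices, codebook

vecs = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
idx, cb = lavq_encode(vecs)
```

The single pass and the absence of a training phase are what give this family of algorithms its speed advantage over iterative Linde-Buzo-Gray design.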
Adaptive Precoded MIMO for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Nabilla, A. F.; Tiong, T. C.
2015-04-01
Long-Term Evolution (LTE) and Long Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates in high spectrum bandwidth and high spectral efficiencies. Technically, precoding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-plus-noise ratio (SINR) across the multiple receiver terminals. However, fixed precoding is not flexible enough to fully utilize the available information transfer rate as channel conditions vary across the bandwidth. Thus, adaptive precoding is proposed. It uses precoding matrix indicator (PMI) channel-state feedback, making it possible to change the precoding codebook entry accordingly, thus achieving a higher data rate than fixed precoding.
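The PMI selection step described above can be sketched as a simple codebook search: for a known channel vector h, pick the precoding vector w that maximizes |h·w|² (a proxy for post-precoding SINR) and feed back its index. The 2-antenna, real-valued codebook below is made up for illustration and is not the LTE standard codebook.

```python
def gain(h, w):
    """|h . w|^2 for real-valued vectors (complex channels work the same way)."""
    dot = sum(hi * wi for hi, wi in zip(h, w))
    return dot * dot

def select_pmi(h, codebook):
    """Return the index of the codebook entry with the largest gain for channel h."""
    return max(range(len(codebook)), key=lambda i: gain(h, codebook[i]))

codebook = [[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071], [0.7071, -0.7071]]
print(select_pmi([0.9, 0.1], codebook))   # → 0 (energy mostly on antenna 1)
print(select_pmi([0.5, 0.5], codebook))   # → 2 (equal-gain combining entry)
```

Adaptivity comes from repeating this search as the reported channel state changes, so the transmitter always uses the codebook entry best matched to the current channel.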
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
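The SOM algorithm whose point densities are analyzed above updates not only the best-matching codebook vector but also its neighbors on the grid — which is precisely why its equilibrium density differs from that of classical vector quantization. A minimal one-dimensional training step, with illustrative learning rate and rectangular neighborhood (not the exact parameters studied in the article), looks like this:

```python
def som_step(weights, x, lr=0.5, radius=1):
    """One SOM update: move the winner and its grid neighbors toward input x."""
    # Winner = codebook vector closest to the input.
    winner = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    for i in range(len(weights)):
        if abs(i - winner) <= radius:      # neighborhood is defined on the grid
            weights[i] += lr * (x - weights[i])
    return winner

weights = [0.0, 0.5, 1.0]
som_step(weights, 0.9)   # winner is unit 2; units 1 and 2 both move toward 0.9
print(weights)
```

Setting radius=0 reduces this to plain competitive learning (classical VQ), making the source of the differing point-density power laws easy to see.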
Evaluating an mHealth App for Health and Well-Being at Work: Mixed-Method Qualitative Study
Wiezer, Noortje; Janssen, Joris H; Vink, Peter; Kraaij, Wessel
2018-01-01
Background To improve workers’ health and well-being, workplace interventions have been developed, but utilization and reach are unsatisfactory, and effects are small. In recent years, new approaches such as mobile health (mHealth) apps are being developed, but the evidence base is poor. Research is needed to examine its potential and to assess when, where, and for whom mHealth is efficacious in the occupational setting. To develop interventions for workers that actually will be adopted, insight into user satisfaction and technology acceptance is necessary. For this purpose, various qualitative evaluation methods are available. Objective The objectives of this study were to gain insight into (1) the opinions and experiences of employees and experts on drivers and barriers using an mHealth app in the working context and (2) the added value of three different qualitative methods that are available to evaluate mHealth apps in a working context: interviews with employees, focus groups with employees, and a focus group with experts. Methods Employees of a high-tech company and experts were asked to use an mHealth app for at least 3 weeks before participating in a qualitative evaluation. Twenty-two employees participated in interviews, 15 employees participated in three focus groups, and 6 experts participated in one focus group. Two researchers independently coded, categorized, and analyzed all quotes yielded from these evaluation methods with a codebook using constructs from user satisfaction and technology acceptance theories. Results Interviewing employees yielded 785 quotes, focus groups with employees yielded 266 quotes, and the focus group with experts yielded 132 quotes. Overall, participants were muted in their enthusiasm about the app. Combined results from the three evaluation methods showed drivers and barriers for technology, user characteristics, context, privacy, and autonomy.
A comparison between the three qualitative methods showed that issues revealed by experts only slightly overlapped with those expressed by employees. In addition, it was seen that the type of evaluation yielded different results. Conclusions Findings from this study provide the following recommendations for organizations that are planning to provide mHealth apps to their workers and for developers of mHealth apps: (1) system performance influences adoption and adherence, (2) relevancy and benefits of the mHealth app should be clear to the user and should address users’ characteristics, (3) app should take into account the work context, and (4) employees should be alerted to their right to privacy and use of personal data. Furthermore, a qualitative evaluation of mHealth apps in a work setting might benefit from combining more than one method. Factors to consider when selecting a qualitative research method are the design, development stage, and implementation of the app; the working context in which it is being used; employees’ mental models; practicability; resources; and skills required of experts and users. PMID:29592846
Evaluating an mHealth App for Health and Well-Being at Work: Mixed-Method Qualitative Study.
de Korte, Elsbeth Marieke; Wiezer, Noortje; Janssen, Joris H; Vink, Peter; Kraaij, Wessel
2018-03-28
To improve workers' health and well-being, workplace interventions have been developed, but utilization and reach are unsatisfactory, and effects are small. In recent years, new approaches such as mobile health (mHealth) apps are being developed, but the evidence base is poor. Research is needed to examine its potential and to assess when, where, and for whom mHealth is efficacious in the occupational setting. To develop interventions for workers that actually will be adopted, insight into user satisfaction and technology acceptance is necessary. For this purpose, various qualitative evaluation methods are available. The objectives of this study were to gain insight into (1) the opinions and experiences of employees and experts on drivers and barriers using an mHealth app in the working context and (2) the added value of three different qualitative methods that are available to evaluate mHealth apps in a working context: interviews with employees, focus groups with employees, and a focus group with experts. Employees of a high-tech company and experts were asked to use an mHealth app for at least 3 weeks before participating in a qualitative evaluation. Twenty-two employees participated in interviews, 15 employees participated in three focus groups, and 6 experts participated in one focus group. Two researchers independently coded, categorized, and analyzed all quotes yielded from these evaluation methods with a codebook using constructs from user satisfaction and technology acceptance theories. Interviewing employees yielded 785 quotes, focus groups with employees yielded 266 quotes, and the focus group with experts yielded 132 quotes. Overall, participants were muted in their enthusiasm about the app. Combined results from the three evaluation methods showed drivers and barriers for technology, user characteristics, context, privacy, and autonomy.
A comparison between the three qualitative methods showed that issues revealed by experts only slightly overlapped with those expressed by employees. In addition, it was seen that the type of evaluation yielded different results. Findings from this study provide the following recommendations for organizations that are planning to provide mHealth apps to their workers and for developers of mHealth apps: (1) system performance influences adoption and adherence, (2) relevancy and benefits of the mHealth app should be clear to the user and should address users' characteristics, (3) app should take into account the work context, and (4) employees should be alerted to their right to privacy and use of personal data. Furthermore, a qualitative evaluation of mHealth apps in a work setting might benefit from combining more than one method. Factors to consider when selecting a qualitative research method are the design, development stage, and implementation of the app; the working context in which it is being used; employees' mental models; practicability; resources; and skills required of experts and users. ©Elsbeth Marieke de Korte, Noortje Wiezer, Joris H Janssen, Peter Vink, Wessel Kraaij. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 28.03.2018.
Sak, Gabriele; Diviani, Nicola; Allam, Ahmed; Schulz, Peter J
2016-01-15
The exponential increase in health-related online platforms has made the Internet one of the main sources of health information globally. The quality of health contents disseminated on the Internet has been a central focus for many researchers. To date, however, few comparative content analyses of pro- and anti-vaccination websites have been conducted, and none of them compared the quality of information. The main objective of this study was therefore to bring new evidence on this aspect by comparing the quality of pro- and anti-vaccination online sources. Based on past literature and health information quality evaluation initiatives, a 40-category assessment tool (Online Vaccination Information Quality Codebook) was developed and used to code a sample of 1093 webpages retrieved via Google and two filtered versions of the same search engine. The categories investigated were grouped into four main quality dimensions: web-related design quality criteria (10 categories), health-specific design quality criteria (3 categories), health-related content attributes (12 categories) and vaccination-specific content attributes (15 categories). Data analysis comprised frequency counts, cross-tabulations, Pearson's chi-square, and other inferential indicators. The final sample included 514 webpages in favor of vaccination, 471 against, and 108 neutral. Generally, webpages holding a favorable view toward vaccination presented more quality indicators compared to both neutral and anti-vaccination pages. However, some notable exceptions to this rule were observed. In particular, no differences were found between pro- and anti-vaccination webpages as regards vaccination-specific content attributes. Our analyses showed that the overall quality of pro-vaccination webpages is superior to anti-vaccination online sources. The developed coding scheme was proven to be a helpful and reliable tool to judge the quality of vaccination-related webpages. 
Based on the results, we advance recommendations for online health information providers as well as directions for future research in this field.
Lebeau, Jean-Pierre; Cadwallader, Jean-Sébastien; Vaillant-Roussel, Hélène; Pouchain, Denis; Yaouanc, Virginie; Aubin-Auger, Isabelle; Mercier, Alain; Rusch, Emmanuel; Remmen, Roy; Vermeire, Etienne; Hendrickx, Kristin
2016-01-01
Objective To construct a typology of general practitioners’ (GPs) responses regarding their justification of therapeutic inertia in cardiovascular primary prevention for high-risk patients with hypertension. Design Empirically grounded construction of typology. Types were defined by attributes derived from the qualitative analysis of GPs’ reported reasons for inaction. Participants 256 GPs randomised in the intervention group of a cluster randomised controlled trial. Setting GPs members of 23 French Regional Colleges of Teachers in General Practice, included in the EffectS of a multifaceted intervention on CArdiovascular risk factors in high-risk hyPErtensive patients (ESCAPE) trial. Data collection and analysis The database consisted of 2638 written responses given by the GPs to an open-ended question asking for the reasons why drug treatment was not changed as suggested by the national guidelines. All answers were coded using constant comparison analysis. A matrix analysis of codes per GP allowed the construction of a response typology, where types were defined by codes as attributes. Initial coding and definition of types were performed independently by two teams. Results Initial coding resulted in a list of 69 codes in the final codebook, representing 4764 coded references in the question responses. A typology including seven types was constructed. 100 GPs were allocated to one and only one of these types, while 25 GPs did not provide enough data to allow classification. Types (numbers of GPs allocated) were: ‘optimists’ (28), ‘negotiators’ (20), ‘checkers’ (15), ‘contextualisers’ (13), ‘cautious’ (11), ‘rounders’ (8) and ‘scientists’ (5). For the 36 GPs that provided 50 or more coded references, analysis of the code evolution over time and across patients showed a consistent belonging to the initial type for any given GP. 
Conclusion This typology could provide GPs with some insight into their general ways of considering changes in the treatment/management of cardiovascular risk factors and guide design of specific physician-centred interventions to reduce inappropriate inaction. Trial registration number NCT00348855. PMID:27178974
Alexander, Dayna S.; Schleiden, Loren J.; Carpenter, Delesha M.
2017-01-01
OBJECTIVES This study aimed to describe the barriers and facilitators that influence community pharmacists' ability to provide medication counseling to pediatric patients. METHODS Semistructured interviews (n = 16) were conducted with pharmacy staff at 3 community pharmacies in 2 Eastern states. The interview guide elicited pharmacy staff experiences interacting with children and their perceived barriers and facilitators to providing medication counseling. Transcripts were reviewed for accuracy and a codebook was developed for data analysis. NVivo 10 was used for content analysis and identifying relevant themes. RESULTS Ten pharmacists and 6 pharmacy technicians were interviewed. Most participants were female (69%), aged 30 to 49 years (56%), with ≥5 years of pharmacy practice experience. Eight themes emerged as barriers to pharmacists' engaging children in medication counseling, the most prevalent being the child's absence during medication pickup, the child appearing to be distracted or uninterested, and having an unconducive pharmacy environment. Pharmacy staff noted 7 common facilitators to engaging children, most importantly, availability of demonstrative and interactive devices/technology, pharmacist demeanor and communication approach, and having child-friendly educational materials. CONCLUSIONS Findings suggest that pharmacy personnel are rarely able to engage children in medication counseling because of the patient's absence during medication pickup; however, having child-friendly materials could facilitate interactions when the child is present. These findings can inform programs and interventions aimed at addressing the barriers pharmacists encounter while educating children about safe and appropriate use of medicines. PMID:29290741
Large deformation image classification using generalized locality-constrained linear coding.
Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
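The locality idea behind LLC is that each input should be represented using only its nearest codebook atoms, with weights that decay with distance. The sketch below illustrates that weighting principle with a normalized Gaussian kernel; the kernel form, sigma, and k are assumptions for illustration, not the exact formulation or weighting scheme of the paper.

```python
import math

def locality_weights(x, codebook, k=2, sigma=1.0):
    """Return (index, weight) pairs for the k nearest codewords of input x."""
    # Euclidean distance from x to every codebook atom.
    dists = [(sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, i)
             for i, c in enumerate(codebook)]
    nearest = sorted(dists)[:k]
    # Distance-decaying (locality) weights, normalized to sum to 1.
    raw = [(i, math.exp(-d / sigma)) for d, i in nearest]
    total = sum(w for _, w in raw)
    return [(i, w / total) for i, w in raw]

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(locality_weights([0.2, 0.1], codebook))   # atom 0 dominates, atom 1 second
```

Restricting the code to a few nearby atoms keeps the representation sparse while remaining smooth in the input, which is what makes such codes effective features for a linear classifier.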
What Are Cancer Centers Advertising to the Public? A Content Analysis
Vater, Laura B.; Donohue, Julie M.; Arnold, Robert; White, Douglas B; Chu, Edward; Schenker, Yael
2015-01-01
Background Although critics have expressed concerns about cancer center advertising, the content of these advertisements has not been analyzed. Objective To characterize the informational and emotional content of cancer center advertisements. Design Systematic analysis of all cancer center advertisements in top U.S. consumer magazines (N=269) and television networks (N=44) in 2012. Measurements Using a standardized codebook, we assessed (1) types of clinical services promoted; (2) information provided about clinical services, including risks, benefits, and costs; (3) use of emotional advertising appeals; and (4) use of patient testimonials. Two investigators independently coded advertisements using ATLAS.ti. Kappa values ranged from 0.77 to 1.0. Results A total of 102 cancer centers placed 409 unique clinical advertisements in top media markets in 2012. Advertisements promoted treatments (88%) more often than screening (18%) or supportive services (13%; p<0.001). Benefits of advertised therapies were described more often than risks (27% vs. 2%; p<0.001) but rarely quantified (2%). Few advertisements mentioned insurance coverage or costs (5%). Emotional appeals were frequent (85%), most often evoking hope for survival (61%), describing cancer treatment as a fight or battle (41%), and evoking fear (30%). Nearly half of advertisements included patient testimonials, usually focused on survival or cure. Testimonials rarely included disclaimers (15%) and never described the results a typical patient might expect. Limitations Internet advertisements were not included. Conclusions Clinical advertisements by cancer centers frequently promote cancer therapy using emotional appeals that evoke hope and fear while rarely providing information about risks, benefits, or costs. Further work is needed to understand how these advertisements influence patient understanding and expectations of benefit from cancer treatments. PMID:24863081
Thrasher, Ashley B; Walker, Stacy E; Hankemeier, Dorice A; Pitney, William A
2015-03-01
Many newly credentialed athletic trainers gain initial employment as graduate assistants (GAs) in the collegiate setting, yet their socialization into their role is unknown. Exploring the socialization process of GAs in the collegiate setting could provide insight into how that process occurs. To explore the professional socialization of GAs in the collegiate setting to determine how GAs are socialized and developed as athletic trainers. Qualitative study. Individual phone interviews. Athletic trainers (N = 21) who had supervised GAs in the collegiate setting for a minimum of 8 years (16 men [76%], 5 women [24%]; years of supervision experience = 14.6 ± 6.6). Data were collected via phone interviews, which were recorded and transcribed verbatim. Data were analyzed by a 4-person consensus team with a consensual qualitative-research design. The team independently coded the data and compared ideas until a consensus was reached, and a codebook was created. Trustworthiness was established through member checks and multianalyst triangulation. Four themes emerged: (1) role orientation, (2) professional development and support, (3) role expectations, and (4) success. Role orientation occurred both formally (eg, review of policies and procedures) and informally (eg, immediate role immersion). Professional development and support consisted of the supervisor mentoring and intervening when appropriate. Role expectations included decision-making ability, independent practice, and professionalism; however, supervisors often expected GAs to function as experienced, full-time staff. Success of the GAs depended on their adaptability and on the proper selection of GAs by supervisors. Supervisors socialize GAs into the collegiate setting by providing orientation, professional development, mentoring, and intervention when necessary. Supervisors are encouraged to use these socialization tactics to enhance the professional development of GAs in the collegiate setting.
A descriptive study of effect-size reporting in research reviews.
Floyd, Judith A
2017-06-01
To describe effect-size reporting in research reviews completed in support of evidence-based practice in nursing. Many research reviews report nurses' critical appraisal of level, quality and overall strength of evidence available to address clinical questions. Several studies of research-review quality suggest effect-size information would be useful to include in these reviews, but none focused on reviewers' attention to effect sizes. Descriptive. One hundred and four reviews indexed in CINAHL as systematic reviews and published from July 2012 to February 2014 were examined. Papers were required to be peer-reviewed, written in English, contain an abstract and have at least one nurse author. Reviews were excluded if they did not use critical appraisal methods to address evidence of correlation, prediction or effectiveness. Data from remaining papers (N = 73) were extracted by three or more independent coders using a structured coding form and detailed codebook. Data were stored, viewed and analysed using Microsoft Office Excel® spreadsheet functions. Sixteen percent (n = 12) of the sample contained effect-size information. Of the 12, six included all the effect-size information recommended by APA guidelines. Independent of completeness of reporting, seven contained discussion of effect sizes in the paper, but none included effect-size information in abstracts. Research reviews available to practicing nurses often fail to include information needed to accurately assess how much improvement may result from implementation of evidence-based policies, programs, protocols or practices. Manuscript reviewers are urged to hold authors to APA standards for reporting/discussing effect-size information in both primary research reports and research reviews. © 2016 John Wiley & Sons Ltd.
Lone Actor Terrorist Attack Planning and Preparation: A Data-Driven Analysis.
Schuurman, Bart; Bakker, Edwin; Gill, Paul; Bouhana, Noémie
2017-10-23
This article provides an in-depth assessment of lone actor terrorists' attack planning and preparation. A codebook of 198 variables related to different aspects of pre-attack behavior is applied to a sample of 55 lone actor terrorists. Data were drawn from open-source materials and complemented where possible with primary sources. Most lone actors are not highly lethal or surreptitious attackers. They are generally poor at maintaining operational security, leak their motivations and capabilities in numerous ways, and generally do so months and even years before an attack. Moreover, the "loneness" thought to define this type of terrorism is generally absent; most lone actors uphold social ties that are crucial to their adoption and maintenance of the motivation and capability to commit terrorist violence. The results offer concrete input for those working to detect and prevent this form of terrorism and argue for a re-evaluation of the "lone actor" concept. © 2017 The Authors. Journal of Forensic Sciences published by Wiley Periodicals, Inc. on behalf of American Academy of Forensic Sciences.
NASA Technical Reports Server (NTRS)
Kondoz, A. M.; Evans, B. G.
1993-01-01
In the last decade, low bit rate speech coding research has received much attention resulting in newly developed, good quality, speech coders operating at as low as 4.8 Kb/s. Although speech quality at around 8 Kb/s is acceptable for a wide variety of applications, at 4.8 Kb/s more improvements in quality are necessary to make it acceptable to the majority of applications and users. In addition to the required low bit rate with acceptable speech quality, other facilities such as integrated digital echo cancellation and voice activity detection are now becoming necessary to provide a cost effective and compact solution. In this paper we describe a CELP speech coder with integrated echo canceller and a voice activity detector all of which have been implemented on a single DSP32C with 32 KBytes of SRAM. The quality of CELP coded speech has been improved significantly by a new codebook implementation which also simplifies the encoder/decoder complexity making room for the integration of a 64-tap echo canceller together with a voice activity detector.
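At the heart of the CELP coder described above is an analysis-by-synthesis codebook search: each candidate excitation vector is passed through the synthesis filter, and the index whose output best matches the target speech is chosen. The toy one-tap synthesis filter and three-entry codebook below are illustrative assumptions, far simpler than a real CELP configuration.

```python
def synthesize(excitation, a=0.8):
    """All-pole synthesis with a single predictor coefficient a (illustrative)."""
    out, prev = [], 0.0
    for e in excitation:
        prev = e + a * prev
        out.append(prev)
    return out

def search_codebook(target, codebook):
    """Analysis-by-synthesis: return the index minimizing squared synthesis error."""
    def err(cv):
        return sum((t - s) ** 2 for t, s in zip(target, synthesize(cv)))
    return min(range(len(codebook)), key=lambda i: err(codebook[i]))

codebook = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]]
target = synthesize([1.0, 0.0, 0.0])        # matches entry 0 by construction
print(search_codebook(target, codebook))    # → 0
```

Because every candidate must be filtered before comparison, this search dominates encoder complexity — which is why the structured-codebook implementations discussed in the abstract matter: structure lets much of the filtering work be shared or precomputed, freeing DSP cycles for the echo canceller and voice activity detector.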
McGuirt, Jared T.; Ward, Rachel; Elliott, Nadya Majette; Bullock, Sally Lawrence; Jilcott Pitts, Stephanie B.
2014-01-01
Little is known about the barriers and facilitators to local food procurement among women of reproductive age (WRA). Therefore we conducted qualitative interviews with WRA in rural eastern and western NC (ENC and WNC) to learn of factors related to locally sourced food procurement. In-depth interviews were conducted among low-income White, Black, and Hispanic English-speaking WRA (N=62 (ENC: 37; WNC: 23) (18–44 years)). Independent coders used a consensus codebook to double-code all transcripts. Coders then came together to discuss and resolve coding discrepancies, and identified themes and salient quotes. Cross-cutting themes from both ENC and WNC participants included access to local food sources; acceptance of Supplemental Nutrition Assistance Program/Electronic Benefit Transfer (SNAP/EBT); freshness of produce; support for local agriculture; and the community aspect of local food sourcing. The in-depth understanding gained from this study could be used to guide tailored policy and intervention efforts aimed at promoting fruit and vegetable consumption among low-income WRA. PMID:25664198
Owen-Smith, Ashli A; Woodyatt, Cory; Sineath, R Craig; Hunkeler, Enid M; Barnwell, La Tasha; Graham, Ashley; Stephenson, Rob; Goodman, Michael
2016-01-01
Purpose: Although transgender people may be at increased risk for a range of health problems, they have been the subject of relatively little health research. An important step toward expanding the evidence base is to understand and address the reasons for nonparticipation and dropout. The aim of this study was to explore the perceptions of barriers to and facilitators of participation in health research among a sample of transgender people in San Francisco, CA, and Atlanta, GA. Methods: Twelve in-person focus groups (FGs) were conducted; six (three with transwomen, three with transmen) were conducted in San Francisco and six FGs were conducted in Atlanta (three with transwomen and three with transmen). FGs were audiorecorded, transcribed, and uploaded to MaxQDA software for analysis. A codebook was used to code transcripts; new codes were added iteratively as they arose. All transcripts were coded by at least 2 of the 4 researchers and, after each transcript was coded, the researchers met to discuss any discrepancies, which were resolved by consensus. Results: Among 67 FG participants, 37 (55%) identified as transmen and 30 (45%) identified as transwomen. The average age of participants was ∼41 years (range 18-67) and the majority (61%) were non-Hispanic Whites. Several barriers that can hinder participation in health research were identified, including logistical concerns, issues related to mistrust, a lack of awareness about participation opportunities, and psychosocial/emotional concerns related to being "outed." A broad range of facilitators were also identified, including the opportunity to gain knowledge, access medical services, and contribute to the transgender community. Conclusion: These findings provide insights about the perceived barriers to and facilitators of research participation and offer some guidance for researchers in our ongoing effort to engage the transgender community in health research.
Locatelli, Sara M; Turcios, Stephanie; LaVela, Sherri L
2015-01-01
To examine providers' perspectives on the care environment and patient-centered care (PCC) through the eyes of the veteran patient, using guided tours qualitative methodology. Environmental factors, such as attractiveness and function, have the potential to improve patients' experiences. Participatory qualitative methods allow researchers to explore the environment and facilitate discussion. Guided tours were conducted with 25 health care providers/employees at two Veterans Affairs (VA) health care facilities. In guided tours, participants lead the researcher through an environment, commenting on their surroundings, thoughts, and feelings. The researcher walks along with the participant, asking open-ended questions as needed to foster discussion and gain an understanding of the participant's view. Participants were asked to walk through the facility as though they were a veteran. Tours were audio recorded, with participant permission, and transcribed verbatim by research assistants. Three qualitative researchers were responsible for codebook development and coding transcripts and used data-driven coding approaches. Participants discussed physical appearance of the environment and how that influences perceptions about care. Overall, participants highlighted the need to shed the "institutional" appearance. Differences between VA and non-VA health care facilities were discussed, including availability of private rooms and staff to assist with navigating the facility. They reviewed resources in the facility, such as the information desk to assist patients and families. Finally, they offered suggestions for future improvements, including improvements to waiting areas and quiet areas for patients to relax and "get away" from their rooms. Participants highlighted many small changes to the care environment that could enhance the patient experience. 
Additionally, they examined the environment from the patient's perspective, to identify elements that enhance, or detract from, the patient's care experience. © The Author(s) 2015.
Tajeu, Gabriel S; Cherrington, Andrea L; Andreae, Lynn; Prince, Candice; Holt, Cheryl L; Halanych, Jewell H
2015-10-01
We qualitatively assessed patients' perceptions of discrimination and patient satisfaction in the health care setting specific to interactions with nonphysician health care staff. We conducted 12 focus-group interviews with African American and European American participants, stratified by race and gender, from June to November 2008. We used a topic guide to facilitate discussion and identify factors contributing to perceived discrimination and analyzed transcripts for relevant themes using a codebook. We enrolled 92 participants: 55 African Americans and 37 European Americans, all of whom reported perceived discrimination and lower patient satisfaction as a result of interactions with nonphysician health care staff. Perceived discrimination was associated with 2 main characteristics: insurance or socioeconomic status and race. Both verbal and nonverbal communication style on the part of nonphysician health care staff were related to individuals' perceptions of how they were treated. The behaviors of nonphysician health care staff in the clinical setting can potentially contribute to patients' perceptions of discrimination and lowered patient satisfaction. Future interventions to reduce health care discrimination should include a focus on staff cultural competence and customer service skills.
Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.
Hung, Chung-Wen; Hsu, Wen-Ting
2018-05-23
Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. Advanced Encryption Standard (AES) is a block cipher algorithm that is commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirement for different payload lengths and AES encryption types are analyzed. These types include software-based AES-CB, hardware-based AES-ECB (Electronic Codebook Mode), and hardware-based AES-CCM (Counter with CBC-MAC Mode). The calculation requirement and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that the hardware-based AES performs better than the software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be a better choice if the IoT device is considering security, encryption calculation requirement, and low power consumption at the same time. However, if the IoT device is pursuing lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
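The payload-length trade-off noted in the conclusion can be made concrete with back-of-envelope block counts: ECB needs one AES block operation per 16-byte block, while CCM roughly doubles that (a CTR encryption pass plus a CBC-MAC pass) and adds a couple of extra blocks for the nonce/header and MAC. The CCM formula below is a deliberate simplification for illustration, not an exact count from the paper or from RFC 3610.

```python
def ecb_blocks(payload_len):
    """AES block operations for ECB (payload padded up to 16-byte blocks)."""
    return -(-payload_len // 16)        # ceiling division

def ccm_blocks(payload_len):
    """Rough AES block operations for CCM: CTR pass + CBC-MAC pass + 2 extra."""
    return 2 * ecb_blocks(payload_len) + 2

for n in (8, 16, 64):
    print(n, ecb_blocks(n), ccm_blocks(n))
```

For a sub-16-byte payload this sketch gives 1 block for ECB versus about 4 for CCM, consistent with the abstract's observation that ECB can be attractive for very short, low-power transmissions (at the cost of CCM's authentication and semantic security).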
Two applications of time reversal mirrors: seismic radio and seismic radar.
Hanafy, Sherif M; Schuster, Gerard T
2011-10-01
Two seismic applications of time reversal mirrors (TRMs) are introduced and tested with field experiments. The first is sending, receiving, and decoding coded messages, similar to a radio except that seismic waves are used. The second, similar to radar surveillance, is detecting and tracking moving objects in a remote area, including determining an object's speed of movement. Both applications require the prior recording of calibration Green's functions in the area of interest. These reference Green's functions are used as a codebook to decrypt the coded message in the first application and as a moving-object sensor in the second. Field tests show that seismic radar can detect the moving coordinates (x(t), y(t), z(t)) of a person running through a calibration site. This information also allows for a calculation of the runner's velocity as a function of location. Results with the seismic radio are successful in seismically detecting and decoding coded pulses produced by a hammer. Both seismic radio and seismic radar are highly robust in high-noise environments due to the super-stacking property of TRMs. © 2011 Acoustical Society of America
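The codebook-decoding idea can be read as matched filtering: a recorded trace is cross-correlated with the stored reference Green's function, and correlation peaks mark the coded pulse times. A toy sketch with a hypothetical five-sample Green's function (not the field-data processing chain):

```python
def xcorr(sig, ref):
    """Sliding inner product of a recorded trace with a stored reference
    Green's function; sharp peaks mark coded pulse arrival times."""
    n = len(sig) - len(ref) + 1
    return [sum(sig[i + j] * ref[j] for j in range(len(ref)))
            for i in range(n)]

# Hypothetical reference Green's function recorded during calibration
g = [0.0, 1.0, -0.5, 0.25, -0.1]

# A "message": hammer pulses fired at coded times 3 and 11
trace = [0.0] * 20
for delay in (3, 11):
    for j, v in enumerate(g):
        trace[delay + j] += v

c = xcorr(trace, g)
peaks = [i for i, v in enumerate(c) if v > 0.9 * max(c)]
print(peaks)  # -> [3, 11]: the coded pulse times are recovered
```

Because correlation sums coherently over the whole Green's function, the same detector keeps working when the trace is buried in noise, which is the super-stacking property the abstract refers to.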
NASA Technical Reports Server (NTRS)
Tilton, James C.; Ramapriyan, H. K.
1989-01-01
A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image, the LCP is 1.89, although when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
The Violent Content in Attenuated Psychotic Symptoms.
Marshall, Catherine; Deighton, Stephanie; Cadenhead, Kristin S; Cannon, Tyrone D; Cornblatt, Barbara A; McGlashan, Thomas H; Perkins, Diana O; Seidman, Larry J; Tsuang, Ming T; Walker, Elaine F; Woods, Scott W; Bearden, Carrie E; Mathalon, Daniel; Addington, Jean
2016-08-30
The relationship between psychosis and violence has typically focused on factors likely to predict who will commit violent acts. One unexplored area is violence in the content of subthreshold positive symptoms. The current aim was to conduct an exploratory analysis of violent content in the attenuated psychotic symptoms (APS) of those at clinical high risk of psychosis (CHR) who met criteria for attenuated psychotic symptom syndrome (APSS). The APS of 442 CHR individuals, determined by the Structured Interview for Prodromal Syndromes, were described in comprehensive vignettes. The content of these symptoms was coded using the Content of Attenuated Positive Symptoms Codebook. Other measures included clinical symptoms, functioning, beliefs and trauma. Individuals with violent content had significantly higher APS, greater negative beliefs about the self and others, and increased bullying. The same findings, along with higher ratings on anxiety symptoms, were present when participants with self-directed violence were compared to participants with no violent content. Individuals reporting violent content differ in their clinical presentation compared to those who do not experience violent content. Adverse life events, like bullying, may impact the presence of violent content in APS. Future studies should explore violent content in relation to actual behavior. Copyright © 2016. Published by Elsevier Ireland Ltd.
Planning/scheduling techniques for VQ-based image compression
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.
1994-01-01
The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images that will be accessed by users across the computer networks. Accessing the image data in its full resolution creates data traffic problems. Image browsing using a lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, vector quantization (VQ) is most appropriate for this application, since the decompression of VQ-compressed images is a table lookup process that makes minimal additional demands on the user's computational resources. Lossy compression of image data generally requires expert-level knowledge and is not straightforward to use. This is especially true in the case of VQ, which involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, and so on. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
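The table-lookup property can be seen in a minimal VQ sketch. The 2-D vectors and four-entry codebook below are made-up illustrations; real codebooks are trained on the data (e.g., with a generalized Lloyd/LBG procedure):

```python
def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codevector
    (squared Euclidean distance) -- the expensive, archive-side step."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: sqdist(v, codebook[k]))
            for v in vectors]

def vq_decode(indices, codebook):
    """Decompression is a pure table lookup, cheap on the user's machine."""
    return [codebook[k] for k in indices]

# Hypothetical 2-D pixel-pair vectors and a 4-entry codebook
codebook = [(0, 0), (10, 10), (20, 20), (30, 30)]
blocks = [(1, 2), (19, 21), (9, 11), (31, 29)]
idx = vq_encode(blocks, codebook)      # -> [0, 2, 1, 3]
recon = vq_decode(idx, codebook)       # approximate reconstruction
```

The asymmetry is the point: all the search cost sits in `vq_encode` at the archive, while the browsing user only pays for indexing into the codebook.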
One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.
Biswas, Sujoy Kumar; Milanfar, Peyman
2016-03-01
One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low-dimensional but discriminatory subspace. Unlike principal components, which preserve the global structure of the feature space, we seek a linear approximation to the Laplacian eigenmap that permits a locality-preserving embedding of high-dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of the Fourier transform combined with the integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. In addition, the low computation cost of the proposed (codebook-free) object detector facilitates rather straightforward query detection in large data sets, including movie videos.
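Matrix cosine similarity here can be read as the cosine of the angle between two equal-sized matrices under the Frobenius inner product. This is a common definition; whether it matches the paper's exact accelerated formulation is an assumption:

```python
import math

def matrix_cosine_similarity(A, B):
    """<A, B>_F / (||A||_F * ||B||_F): cosine similarity between two
    matrices treated as vectors under the Frobenius inner product."""
    inner = sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
    na = math.sqrt(sum(a * a for row in A for a in row))
    nb = math.sqrt(sum(b * b for row in B for b in row))
    return inner / (na * nb)

A = [[1.0, 0.0], [0.0, 1.0]]
B = [[2.0, 0.0], [0.0, 2.0]]
s_same = matrix_cosine_similarity(A, A)   # identical matrices -> 1.0
s_scaled = matrix_cosine_similarity(A, B) # scale-invariant    -> 1.0
```

Being normalized by both Frobenius norms, the measure is invariant to global scaling of either feature matrix, which is what makes it attractive as a detection decision rule.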
Calcium, Vitamin D, Iron, and Folate Messages in Three Canadian Magazines.
Cooper, Marcia; Zalot, Lindsay; Wadsworth, Laurie A
2014-12-01
Data from the Canadian Community Health Survey showed that calcium, vitamin D, iron, and folate are nutrients of concern for females 19-50 years of age. The study objectives were to assess the quantity, format, and accuracy of messages related to these nutrients in selected Canadian magazines and to examine their congruency with Canadian nutrition policies. Using content analysis methodology, messages were coded using a stratified sample of a constructed year for Canadian Living, Chatelaine, and Homemakers magazines (n = 33) from 2003-2008. Pilot research was conducted to assess inter-coder agreement and to develop the study coding sheet and codebook. The messages identified (n = 595) averaged 18 per magazine issue. Messages most frequently concerned calcium, followed by folate, iron, and vitamin D, and were found primarily in articles (46%) and advertisements (37%). Overall, most messages were coded as accurate (82%) and congruent with Canadian nutrition policies (90%). This research demonstrated that the majority of messages in 3 Canadian magazines between 2003 and 2008 were accurate and reflected Canadian nutrition policies. Because Canadian women continue to receive much nutrition information via print media, this research provides important insights for dietitians into media messaging.
Ong, Keh Kiong; Ting, Kit Cheng; Chow, Yeow Leng
2018-01-01
To understand the perceptions of critical care nurses towards providing end-of-life care. There has been an increasing interest in end-of-life care in the critical care setting. In Singapore, approximately half of deaths in the hospital occur during critical care. While nurses are well positioned to provide end-of-life care to patients and their family members, they faced barriers to providing end-of-life care. Also, providing end-of-life care has profound positive and negative psychological effects on nurses, with the latter being more prominent. Qualitative descriptive design. Data collection was performed in a medical intensive care unit of a public tertiary hospital in Singapore. Ten registered nurses were purposively sampled and interviewed individually using a semi-structured interview guide. A codebook was developed to guide coding, and data were thematically analysed. Rigour was maintained. Nurses went through a trajectory of experience. They experienced the culture of care and developed dissatisfaction with it. The tension shaped their perception and meaning of life and death, and they developed mechanisms to reach resolution. This study provides insight into nurses' perceptions as a trajectory of experience and raises several implications for clinical practice, policy and research. There is a need to alleviate the tension nurses face and to facilitate coming to terms with the tension by improving the culture of care and supporting nurses. Nurses could be involved more in decision-making and empowered to start end-of-life care conversations within the team and with family members. Communication with family members and between nurses and doctors could be improved. Support for nurses providing end-of-life care could be enhanced through promoting social networks, education and bereavement support. Further research is needed to explore ways to support and empower nurses to provide end-of-life care in critical care. © 2017 John Wiley & Sons Ltd.
Korner, Eli J; Morris, Anne; Allen, Isabel Elaine; Hurvitz, Sara; Beattie, Mary S; Kalesan, Bindu
2015-10-01
Human epidermal growth factor receptor 2 (HER2)-positive metastatic breast cancer (MBC) is an aggressive form of breast cancer and is historically associated with poor outcomes compared with HER2-negative MBC. Since 1998, four drugs have been globally approved for the targeted treatment of HER2-positive MBC. Additional advances in patient care, such as improved breast cancer screening, HER2 testing, and supportive care, have also occurred. The objective of this systematic review and meta-analysis is to determine whether there has been a cumulative change in survival over time in patients with HER2-positive advanced breast cancer based on results from interventional clinical trials (ICTs) and observational studies and to compare outcomes across these types of studies. A systematic search of Medline, EMBASE, and the Cochrane Central Register of Controlled Trials will be performed. Two investigators will independently assess each abstract for inclusion. English language reports of ICTs and observational studies that include patients with HER2-positive advanced breast cancer from 1987 onwards will be considered. The primary outcome of interest is overall survival; secondary outcomes include progression-free survival and safety. Data on clinical outcomes, as well as on study design, study population, treatment/intervention, and methodological quality, will be extracted using a structured codebook developed by the authors for this study. Standard and cumulative random effects meta-analysis will be performed to derive pooled risk estimates, both overall and by study design, controlling for covariates such as aggregate demographic and clinical characteristics of patients, treatment/intervention, and study characteristics. Heterogeneity of studies will be evaluated using the I² statistic. Differences in risk estimates by quality characteristics will be examined using meta-regression.
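As a reference point for the planned heterogeneity assessment, the I² statistic can be computed from Cochran's Q under inverse-variance weighting. This is the standard textbook formulation; the study effects and variances below are made-up numbers for illustration:

```python
def i_squared(effects, variances):
    """Cochran's Q with fixed-effect (inverse-variance) weights, then
    I^2 = max(0, (Q - df) / Q) * 100: the percentage of total variation
    across studies attributable to heterogeneity rather than chance."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    return 0.0 if q == 0 else max(0.0, (q - df) / q) * 100.0

print(i_squared([0.5, 0.5, 0.5], [0.1, 0.1, 0.1]))  # identical effects -> 0.0
print(i_squared([0.0, 2.0], [1.0, 1.0]))            # Q = 2, df = 1 -> 50.0
```

Values near 0% suggest the studies estimate a common effect; values above roughly 50-75% are conventionally read as substantial heterogeneity, motivating random-effects pooling and meta-regression as planned in the protocol.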
This study will evaluate current and evolving trends in survival associated with HER2-positive advanced breast cancer over nearly 30 years and will build upon prior, less comprehensive, systematic analyses. This information is important to patients, healthcare providers, and researchers, particularly in the advanced disease setting, in which new therapies have been recently approved. Including observational studies allows us to evaluate real-world effectiveness; useful information will be gained by comparing findings from observational studies with those from ICTs. PROSPERO CRD42014014345.
Orthodontic informed consent considering information load and serial position effect.
Pawlak, Caroline E; Fields, Henry W; Beck, F Michael; Firestone, Allen R
2015-03-01
Previous research has demonstrated that current methods of informed consent are relatively ineffective as shown by poor recall and comprehension by adolescent patients and their parents. The purpose of this study was to determine whether adding a short videotape presentation reiterating the issues related to informed consent to a modified informed consent document that emphasizes a limited number of core and patient-specific custom "chunks" at the beginning of an informed consent presentation improved the recall and comprehension of the risks, benefits, and alternatives of orthodontic treatment. A second objective was to evaluate the current related data for recommendable practices. Seventy patient-parent pairs were randomly divided into 2 groups. The intervention group (group A) patients and parents together reviewed a customized slide show and a short videotape presentation describing the key risks of orthodontic treatment. Group B followed the same protocol without viewing the videotape. All patients and parents were interviewed independently by research assistants using an established measurement tool with open-ended questions. Interviews were transcribed and scored for the appropriateness of responses using a previously established codebook. Lastly, the patients and parents were given 2 reading literacy tests, 1 related to health and 1 with general content, followed by self-administered demographic and psychological state questionnaires. There were no significant differences between the groups for sociodemographic variables. There were no significant differences between the groups for overall recall and comprehension; recall and comprehension for the domains of treatment, risk, and responsibility; and recall and comprehension for core, general, and custom items. The positional effects were limited in impact.
When compared with previous studies, these data further demonstrate the benefit of improved readability and audiovisual supplementation with the addition of "chunking." There is no benefit to adding a short video to the previously established improved readability and audiovisual supplementation. There is a significant benefit of improved readability and audiovisual slide supplementation with the addition of "chunking" over traditional informed consent methods in terms of patient improvement in overall comprehension, treatment recall, and treatment comprehension. The treatment domain is the most affected. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Using a cVEP-Based Brain-Computer Interface to Control a Virtual Agent.
Riechmann, Hannes; Finke, Andrea; Ritter, Helge
2016-06-01
Brain-computer interfaces provide a means for controlling a device by brain activity alone. One major drawback of noninvasive BCIs is their low information transfer rate, obstructing a wider deployment outside the lab. BCIs based on codebook visually evoked potentials (cVEP) outperform all other state-of-the-art systems in that regard. Previous work investigated cVEPs for spelling applications. We present the first cVEP-based BCI for use in real-world settings to accomplish everyday tasks such as navigation or action selection. To this end, we developed and evaluated a cVEP-based on-line BCI that controls a virtual agent in a simulated, but realistic, 3-D kitchen scenario. We show that cVEPs can be reliably triggered with stimuli in less restricted presentation schemes, such as on dynamic, changing backgrounds. We introduce a novel, dynamic repetition algorithm that allows for optimizing the balance between accuracy and speed individually for each user. Using these novel mechanisms in a 12-command cVEP-BCI in the 3-D simulation results in ITRs of 50 bits/min on average and 68 bits/min maximum. Thus, this work supports the notion of cVEP-BCIs as a particularly fast and robust approach suitable for real-world use.
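ITR figures like those above are commonly reported using the Wolpaw formula, which converts selection accuracy and command-set size into bits per selection. A generic sketch follows; whether this study used exactly this formula, and the selection rate shown, are assumptions for illustration:

```python
import math

def wolpaw_itr(n_commands, accuracy, selections_per_min):
    """Wolpaw ITR in bits/min: bits per selection times selection rate.
    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))."""
    n, p = n_commands, accuracy
    if p >= 1.0:
        bits = math.log2(n)          # the (1 - P) term vanishes at P = 1
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# A 12-command interface at perfect accuracy carries log2(12) ~ 3.58 bits
# per selection, so roughly 14 selections/min would yield ~50 bits/min.
rate = wolpaw_itr(12, 1.0, 14)
```

The formula makes the accuracy/speed trade-off explicit: any drop in accuracy reduces bits per selection, which the dynamic repetition algorithm can offset by adjusting how long each command is stimulated.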
White, Jaclyn M; Dunham, Emilia; Rowley, Blake; Reisner, Sari L; Mimiaga, Matthew J
2015-01-01
Sexually explicit media may perpetuate racial and sexual norms among men who have sex with men. While men may be exposed to sexually explicit media in the online settings where they seek sex with other men, no studies to our knowledge have explored the relationship between the racial and sexual content of advertisements appearing in these spaces. In 2011, using a detailed codebook, 217 sexually explicit advertisements on a male sex-seeking website were coded for themes, actor characteristics and sexual acts depicted. Multivariable logistic regression models examined the association between skin colour, theme, sexual acts and condomless sex acts. Nearly half (45%) featured a 'thug' theme (a style emphasising Black masculinity/hip-hop culture), 21% featured a college theme and 44% featured condomless sex. Advertisements featuring only Black men, advertisements featuring Black men with men of other skin tones and advertisements depicting a thug theme were positively associated with depictions of condomless sex. Online sexually explicit advertisements featuring Black themes and actors more frequently depicted condomless sex than advertisements with White men alone. Future research should examine whether depictions of Black men engaging in condomless sex in online advertisements influence the sexual norms and cognitions of Black men who have sex with men and their partners.
Levis, Denise M; Westbrook, Kyresa
2013-01-01
Many health organizations and practitioners in the United States promote preconception health (PCH) to consumers. However, summaries and evaluations of PCH promotional activities are limited. We conducted a content analysis of PCH health education materials collected from local-, state-, national-, and federal-level partners by using an existing database of partners, outreach to maternal and child health organizations, and a snowball sampling technique. Thirty-two materials were included for analysis, based on inclusion/exclusion criteria. A codebook guided coding of materials' characteristics (type, authorship, language, cost), use of marketing and behavioral strategies to reach the target population (target audience, message framing, call to action), and inclusion of PCH subject matter (clinical-behavioral components). The self-assessment of PCH behaviors was the most common material (28%) to appear in the sample. Most materials broadly targeted women, and there was a near-equal distribution in targeting by pregnancy planning status segments (planners and nonplanners). "Practicing PCH benefits the baby's health" was the most common message frame used. Materials contained a wide range of clinical-behavioral components. Strategic targeting of subgroups of consumers is an important but overlooked strategy. More research is needed around PCH components, in terms of packaging and increasing motivation, which could guide use and placement of clinical-behavioral components within promotional materials.
Guilamo-Ramos, Vincent; Lee, Jane J; Ruiz, Yumary; Hagan, Holly; Delva, Marlyn; Quiñones, Zahira; Kamler, Alexandra; Robles, Gabriel
2015-01-01
While the Caribbean has the second highest global human immunodeficiency virus (HIV) prevalence, insufficient attention has been paid to contributing factors of the region's elevated risk. Largely neglected is the potential role of drugs in shaping the Caribbean HIV/acquired immune deficiency syndrome epidemic. Caribbean studies have almost exclusively focused on drug transportation and seldom acknowledged local user economies and drug-related health and social welfare consequences. While tourism is consistently implicated within the Caribbean HIV epidemic, less is known about the intersection of drugs and tourism. Tourism areas represent distinct ecologies of risk often characterised by sex work, alcohol consumption and population mixing between lower and higher risk groups. Limited understanding of availability and usage of drugs in countries such as the Dominican Republic (DR), the Caribbean country with the greatest tourist rates, presents barriers to HIV prevention. This study addresses this gap by conducting in-depth interviews with 30 drug users in Sosúa, a major sex tourism destination of the DR. A two-step qualitative data analysis process was utilised and interview transcripts were systematically coded using a well-defined thematic codebook. Results suggest three themes: (1) local demand shifts drug routes to tourism areas, (2) drugs shape local economies and (3) drug use facilitates HIV risk behaviours in tourism areas.
Adams, Kristen; Cimino, Jenica E W; Arnold, Robert M; Anderson, Wendy G
2012-10-01
To describe hospital-based physicians' responses to patients' verbal expressions of negative emotion and identify patterns of further communication associated with different responses. Qualitative analysis of physician-patient admission encounters audio-recorded between August 2008 and March 2009 at two hospitals within a university system. A codebook was iteratively developed to identify patients' verbal expressions of negative emotion. We categorized physicians' responses by their immediate effect on further discussion of emotion - focused away (away), focused neither toward nor away (neutral), and focused toward (toward) - and examined further communication patterns following each response type. In 79 patients' encounters with 27 physicians, the median expression of negative emotion was 1, range 0-14. Physician responses were 25% away, 43% neutral, and 32% toward. Neutral and toward responses elicited patient perspectives, concerns, social and spiritual issues, and goals for care. Toward responses demonstrated physicians' support, contributing to physician-patient alignment and agreement about treatment. Responding to expressions of negative emotion neutrally or with statements that focus toward emotion elicits clinically relevant information and is associated with positive physician-patient relationship and care outcomes. Providers should respond to expressions of negative emotion with statements that allow for or explicitly encourage further discussion of emotion. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Marfan syndrome patient experiences as ascertained through postings on social media sites.
Kelleher, Erin; Giampietro, Philip F; Moreno, Megan A
2015-11-01
Marfan syndrome (MS) is a connective tissue disorder that affects thousands of adolescents [Population Reference Bureau, 2013]. Some adolescent patients with MS may use social media to express their experiences and emotions, but little is known about what patients choose to share online. To investigate social media content related to Marfan syndrome, we used the search terms "Marfan syndrome" and "Marfans" on six different social media sites. The top five recent and popular posts for each site were collected and coded weekly for five weeks. Posts were excluded if they were reshared content or not in English. A codebook was developed using an iterative process to categorize posts and comments. Out of 300 posts collected, 147 (49.0%) were included for evaluation. Categories of displayed content included personal pictures, memes, and pictures featuring symptoms of MS (41.5%) and personal MS experiences (27.1%). One quarter of the posts specifically mentioned a positive experience or how thankful the profile owner was for their life. A unique category of posts (13.7%) referenced Austin Carlile, a celebrity singer with MS, as a role model. Physicians and healthcare providers may consider using social media to understand common MS concerns and to place future health education materials. © 2015 Wiley Periodicals, Inc.
Sex Differences in Hookah-Related Images Posted on Tumblr: A Content Analysis.
Primack, Brian A; Carroll, Mary V; Shensa, Ariel; Davis, Wesley; Levine, Michele D
2016-01-01
Hookah tobacco smoking is prevalent, widespread, and associated with large amounts of toxicants. Hookah tobacco smoking may be viewed differently by males and females. For example, females have been drawn to types of tobacco that are flavored, milder, and marketed as more social and exotic. Individuals often use the growing segment of anonymous social networking sites, such as Tumblr, to learn about potentially dangerous or harmful behaviors. We used a systematic process involving stratification by time of day, day of week, and search term to gather a sample of 140 Tumblr posts related to hookah tobacco smoking. After a structured codebook development process, 2 coders independently assessed all posts in their entirety, and all disagreements were easily adjudicated. When data on poster sex and age were available, we found that 77% of posts were posted by females and 35% were posted by individuals younger than 18. The most prominent features displayed in all posts were references to or images of hookahs themselves, sexuality, socializing, alcohol, hookah smoke, and tricks performed with hookah smoke. Compared with females, males more frequently posted images of hookahs and alcohol-related images or references. This information may help guide future research in this area and the development of targeted interventions to curb this behavior.
Dermatology within the UK podiatric literature: a content analysis (1989-2010)
2013-01-01
Background Although dermatology, as a medical subject, has been a facet of the training and education of podiatrists for many years, it is, arguably, only in recent years that the speciality of podiatric dermatology has emerged within the profession. Some indication of this gradual development may be identified through a content analysis of the UK podiatric literature spanning a 21-year timeframe. Method Six key professional journals were selected for content analysis in order to provide a picture of the emergence and development of podiatric dermatology over a period extending from 1989 to 2010. Both syntactical and thematic unitization were deployed in the analysis, revealing both manifest and latent content. Categories were devised using a priori coding, a codebook was produced to define relevant concepts and category characteristics, and the coding scheme was subjected to an assessment of reliability. Results 1611 units appeared in the 6 journals across the 21-year timeframe. 88% (n = 1417) occurred in one journal (Podiatry Now and its predecessors). Modal categories within all journals included course adverts (n = 673), commercial adverts (n = 562) and articles by podiatrists (n = 133). There was an overall rise from 40 units per annum in 1989 to over 100 in 2010. A wider range of dermatological topics was addressed, ranging from fungal nail infections to melanoma. Conclusions It is evident from this analysis that there has been an increasing focus on dermatology as a topic within the main podiatric journals in the UK over the last 21 years, primarily reflecting a rise in commercial advertising and an increase in academic dermatology-related publications. Whilst earlier publications tended to focus on warts and fungal infections, more recent publications address a broader spectrum of topics. Changes in prescribing rights may be relevant to these findings, as may the enhanced professional and regulatory body requirements on continuing professional development.
PMID:23705878
Hazelton, Patrick T.; Steward, Wayne T.; Collins, Shane P.; Gaffney, Stuart; Morin, Stephen F.; Arnold, Emily A.
2014-01-01
Background In preparation for full Affordable Care Act implementation, California has instituted two healthcare initiatives that provide comprehensive coverage for previously uninsured or underinsured individuals. For many people living with HIV, this has required transition either from the HIV-specific coverage of the Ryan White program to the more comprehensive coverage provided by the county-run Low-Income Health Programs or from Medicaid fee-for-service to Medicaid managed care. Patient advocates have expressed concern that these transitions may present implementation challenges that will need to be addressed if ambitious HIV prevention and treatment goals are to be achieved. Methods 30 semi-structured, in-depth interviews were conducted between October, 2012, and February, 2013, with policymakers and providers in 10 urban, suburban, and rural California counties. Interview topics included: continuity of patient care, capacity to handle payer source transitions, and preparations for healthcare reform implementation. Study team members reviewed interview transcripts to produce emergent themes, develop a codebook, build inter-rater reliability, and conduct analyses. Results Respondents supported the goals of the ACA, but reported clinic and policy-level challenges to maintaining patient continuity of care during the payer source transitions. They also identified strategies for addressing these challenges. Areas of focus included: gaps in communication to reach patients and develop partnerships between providers and policymakers, perceived inadequacy in new provider networks for delivering quality HIV care, the potential for clinics to become financially insolvent due to lower reimbursement rates, and increased administrative burdens for clinic staff and patients. Conclusions California's new healthcare initiatives represent ambitious attempts to expand and improve health coverage for low-income individuals. 
The state's challenges in maintaining quality care and treatment for people living with HIV experiencing these transitions demonstrate the importance of setting effective policies in anticipation of full ACA implementation in 2014. PMID:24599337
Albino, Sandra; Tabb, Karen M.; Requena, David; Egoavil, Miguel; Pineros-Leano, Maria F.; Zunt, Joseph R.; García, Patricia J.
2014-01-01
Background Tuberculosis (TB) is a global health concern and a leading infectious cause of mortality. Reversing TB incidence and disease-related mortality is a major global health priority. Infectious disease mortality is directly linked to failure to adhere to treatment. Sending reminders by short message service (SMS) has been shown to improve treatment adherence. However, few studies have examined TB patients' perceptions of and attitudes towards using SMS technology to increase treatment adherence. In this study, we sought to investigate perceptions of the feasibility and acceptability of using text messaging to improve treatment adherence among adults receiving treatment for TB in Callao, Peru. Methods We conducted qualitative focus group interviews with participants who were TB positive but no longer contagious to understand the attitudes, perceptions, and feasibility of using SMS reminders to improve TB treatment adherence. Subjects receiving care through the National TB Program were recruited through public health centers in Ventanilla, Callao, Peru. In four focus groups, we interviewed 16 patients. All interviews were recorded and transcribed verbatim. Thematic network analysis and codebook techniques were used to analyze data. Results Three major themes emerged from the data: limits on health literacy and information posed challenges to successful TB treatment adherence, treatment motivation at times facilitated adherence, and acceptability of SMS, including positive perceptions of SMS to improve TB treatment adherence. The majority of patients shared considerations about how to administer an SMS intervention effectively and confidentially with TB-positive participants. 
Conclusion The overall perceptions of the use of SMS were positive and indicated that SMS technology may be an efficient way to transmit motivational texts on treatment, health education information, and simple reminders to increase treatment adherence for low-income TB patients living in Peru. PMID:24828031
“Fitspiration” on Social Media: A Content Analysis of Gendered Images
Prichard, Ivanka; Lim, Megan Su Cheng
2017-01-01
Background “Fitspiration” (also known as “fitspo”) aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, the degree to which messages about body image and exercise differ by the gender of the subject is currently unclear. Objective The aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender. Methods Content tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook. Results Of the 476 posts extracted, 415 (87.2%) were relevant; most were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151/415 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely than the male subjects to be aged under 25 years (P<.001), to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002). Conclusions Female #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image. PMID:28356239
Collins, Patricia A; Abelson, Julia; Pyman, Heather; Lavis, John N
2006-07-01
News media effects on their audiences are complex. Four commonly cited effects are: informing audiences; agenda-setting; framing; and persuading. The release in autumn 2002 of two reports on options for reforming Canada's healthcare system attracted widespread media attention. We explored the potential for each of the four media effects by examining Canadian newspaper representation of this healthcare policy debate. Clippings were gathered from regional and national newspapers. Two data collection methodologies were employed: the first involved two staggered "constructed weeks" designed to capture thematic news framing styles; the second collected "intensive" or episodic coverage immediately following the report releases. Health reform articles with a financing and/or delivery focus were included. Using a codebook, articles were coded to track article characteristics, tone, healthcare sector and reform themes, and key actors. A greater quantity of episodic (n=341 clippings) versus thematic coverage (n=77) was documented. Coverage type did not vary significantly by newspaper, reporting source (e.g., staff reporter versus staff editorialist) or article type (e.g., news versus letter). Thematic articles were significantly shorter in length compared to episodic clippings. Episodic coverage tended to have a positive tone, while thematic coverage ranged in tone. Most coverage was general in scope. Sector-specific coverage favoured physician and hospital care--the two providers accorded privileged financing arrangements under Canada's universal, provincially administered health-insurance plans. Coverage of healthcare financing arrangements favoured broad discussions of publicly financed healthcare, federal-provincial governmental relations, and the Canada Health Act that governs provincial plans. Governmental actors and the political institutions that they represent were the dominant actors. Professional associations were also visible, but played a less dominant role. 
Given its non-specific scope, it is unclear how informative this coverage was. The large quantity and short duration of the episodic coverage, and the preponderance of governmental actors, suggests these newspapers acted as conduits for the policy agenda. Differences in framing styles were observed by coverage type, newspaper, reporting source, article length and type of article. Finally, the dominance of governmental actors provided these actors with numerous opportunities to persuade the public.
Dombrowski, Julia C; Carey, James W; Pitts, Nicole; Craw, Jason; Freeman, Arin; Golden, Matthew R; Bertolli, Jeanne
2016-06-10
U.S. health departments have not historically used HIV surveillance data for disease control interventions with individuals, but advances in HIV treatment and surveillance are changing public health practice. Many U.S. health departments are in the early stages of implementing "Data to Care" programs to assist persons living with HIV (PLWH) with engaging in care, based on information collected for HIV surveillance. Stakeholder engagement is a critical first step for development of these programs. In Seattle-King County, Washington, the health department conducted interviews with HIV medical care providers and PLWH to inform its Data to Care program. This paper describes the key themes of these interviews and traces the evolution of the resulting program from 2010 to 2015. Disease intervention specialists conducted individual, semi-structured qualitative interviews with 20 PLWH randomly selected from HIV surveillance who had HIV RNA levels >10,000 copies/mL in 2009-2010. A physician investigator conducted key informant interviews with 15 HIV medical care providers. Investigators analyzed de-identified interview transcripts, developed a codebook of themes, independently coded the interviews, and identified the most frequently used codes as well as illustrative quotes for these key themes. PLWH generally accepted the idea of the health department helping PLWH engage in care, and described how hearing about the treatment experiences of HIV-seropositive peers would assist them with engagement in care. Although many physicians were supportive of the Data to Care concept, others expressed concern about potential health department intrusion on patient privacy and the patient-physician relationship. Providers emphasized the need for the health department to coordinate with existing efforts to improve patient engagement. 
As a result of the interviews, the Data to Care program in Seattle-King County was designed to incorporate an HIV-positive peer component and to ensure coordination with HIV care providers in the process of relinking patients to care. Health departments can build support for Data to Care efforts by gathering input of key stakeholders, such as HIV medical and social service providers, and coordinating with clinic-based efforts to re-engage patients in care.
Lebeau, Jean-Pierre; Cadwallader, Jean-Sébastien; Vaillant-Roussel, Hélène; Pouchain, Denis; Yaouanc, Virginie; Aubin-Auger, Isabelle; Mercier, Alain; Rusch, Emmanuel; Remmen, Roy; Vermeire, Etienne; Hendrickx, Kristin
2016-05-13
To construct a typology of general practitioners' (GPs) responses regarding their justification of therapeutic inertia in cardiovascular primary prevention for high-risk patients with hypertension. Empirically grounded construction of typology. Types were defined by attributes derived from the qualitative analysis of GPs' reported reasons for inaction. 256 GPs randomised in the intervention group of a cluster randomised controlled trial. GPs members of 23 French Regional Colleges of Teachers in General Practice, included in the EffectS of a multifaceted intervention on CArdiovascular risk factors in high-risk hyPErtensive patients (ESCAPE) trial. The database consisted of 2638 written responses given by the GPs to an open-ended question asking for the reasons why drug treatment was not changed as suggested by the national guidelines. All answers were coded using constant comparison analysis. A matrix analysis of codes per GP allowed the construction of a response typology, where types were defined by codes as attributes. Initial coding and definition of types were performed independently by two teams. Initial coding resulted in a list of 69 codes in the final codebook, representing 4764 coded references in the question responses. A typology including seven types was constructed. 100 GPs were allocated to one and only one of these types, while 25 GPs did not provide enough data to allow classification. Types (numbers of GPs allocated) were: 'optimists' (28), 'negotiators' (20), 'checkers' (15), 'contextualisers' (13), 'cautious' (11), 'rounders' (8) and 'scientists' (5). For the 36 GPs that provided 50 or more coded references, analysis of the code evolution over time and across patients showed a consistent belonging to the initial type for any given GP. 
This typology could provide GPs with some insight into their general ways of considering changes in the treatment/management of cardiovascular risk factors and guide the design of specific physician-centred interventions to reduce inappropriate inaction. Trial registration: NCT00348855.
MacDonald, Joanna Petrasek; Ford, James D.; Willox, Ashlee Cunsolo; Ross, Nancy A.
2013-01-01
Objectives To review the protective factors and causal mechanisms which promote and enhance Indigenous youth mental health in the Circumpolar North. Study design A systematic literature review of peer-reviewed English-language research was conducted to systematically examine the protective factors and causal mechanisms which promote and enhance Indigenous youth mental health in the Circumpolar North. Methods This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, with elements of a realist review. From 160 records identified in the initial search of 3 databases, 15 met the inclusion criteria and were retained for full review. Data were extracted using a codebook to organize and synthesize relevant information from the articles. Results More than 40 protective factors at the individual, family, and community levels were identified as enhancing Indigenous youth mental health. These included practicing and holding traditional knowledge and skills, the desire to be useful and to contribute meaningfully to one's community, having positive role models, and believing in one's self. Broadly, protective factors at the family and community levels were identified as positively creating and impacting one's social environment, which interacts with factors at the individual level to enhance resilience. An emphasis on the roles of cultural and land-based activities, history, and language, as well as on the importance of social and family supports, also emerged throughout the literature. Conclusions Healthy communities and families foster and support youth who are resilient to mental health challenges and able to adapt and cope with multiple stressors, be they social, economic, or environmental. Creating opportunities and environments where youth can successfully navigate challenges and enhance their resilience can in turn contribute to fostering healthy Circumpolar communities. Looking at the role of new social media in the way youth communicate and interact is one way of understanding how to create such opportunities. Youth perspectives of mental health programmes are crucial to developing appropriate mental health support, and meaningful engagement of youth can inform locally appropriate and culturally relevant mental health resources, programmes and community resilience strategies. PMID:24350066
Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback
NASA Astrophysics Data System (ADS)
Schwarz, Stefan; Heath, Robert W.; Rupp, Markus
2013-12-01
This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, enabled by a low-latency, high-bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such a distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. A DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term Evolution (LTE) standard compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO in comparison to SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in CSI quantization accuracy, and in the throughput of especially MU-MIMO systems, can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.
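The quantized-feedback idea described above can be illustrated with a toy sketch: the receiver quantizes its channel direction to the closest codeword in a codebook shared with the base station and feeds back only the codeword index. This is a generic illustration, not the standardized LTE codebook; the random-codebook construction and the sizes below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_codebook(num_entries, num_tx):
    """Random unit-norm codebook (a stand-in for a standardized codebook)."""
    cb = (rng.standard_normal((num_entries, num_tx))
          + 1j * rng.standard_normal((num_entries, num_tx)))
    return cb / np.linalg.norm(cb, axis=1, keepdims=True)

def quantize_csi(h, codebook):
    """Return the index of the codeword best aligned with the channel
    direction (max |<w, h>|, i.e. min chordal distance) and the resulting
    quantization quality cos^2(angle), which lies in [0, 1]."""
    h_dir = h / np.linalg.norm(h)
    corr = np.abs(codebook.conj() @ h_dir)
    idx = int(np.argmax(corr))
    return idx, float(corr[idx] ** 2)

codebook = make_codebook(num_entries=16, num_tx=4)        # 4-bit feedback
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # one user's channel
idx, quality = quantize_csi(h, codebook)                  # only idx is fed back
```

Exploiting temporal correlation, as the article suggests, would amount to re-centering or shrinking such a codebook around the previously reported codeword rather than drawing it independently each time.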
Visual Semantic Based 3D Video Retrieval System Using HDFS.
Kumar, C Ranjith; Suguna, S
2016-08-01
This paper presents a new framework for visual-semantic 3D video search and retrieval. Existing 3D retrieval applications focus on shape analysis tasks such as object matching, classification, and retrieval rather than on video retrieval as such; here we explore 3D content-based video retrieval (3D-CBVR) for the first time, combining the bag-of-visual-words (BOVW) model with MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color, and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After the local descriptors are extracted, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook, and a histogram is produced for each video. Matching is then performed using a soft weighting scheme with the L2 distance function. Finally, the retrieved results are ranked by index value and returned to the user. To handle the large data volume and keep retrieval efficient, the system is built on the Hadoop Distributed File System (HDFS). Experiments on a 3D video dataset show that the proposed system produces accurate results while also reducing time complexity.
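The codebook-and-histogram stage of such a BOVW pipeline can be sketched as follows. This is a minimal illustration only: it substitutes plain k-means for the paper's TB-PCT clustering, uses hard assignment rather than soft weighting, and invents toy local descriptors; only the histogram encoding and L2 ranking follow the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_codebook(descriptors, k, iters=20):
    """Build a visual codebook by k-means (stand-in for TB-PCT)."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword and
    return an L1-normalized word histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy local descriptors for two "videos" plus a query clip.
db = [rng.standard_normal((200, 8)),         # video A descriptors
      rng.standard_normal((200, 8)) + 2.0]   # video B, a shifted cluster
codebook = kmeans_codebook(np.vstack(db), k=16)
hists = [bovw_histogram(v, codebook) for v in db]
query = bovw_histogram(db[0][:100], codebook)   # clip taken from video A
# Rank database items by L2 distance between histograms (smaller = better).
ranking = np.argsort([np.linalg.norm(query - h) for h in hists])
```

In a soft-weighting variant, each descriptor would contribute fractional counts to its several nearest codewords instead of a single hard vote.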
NASA Astrophysics Data System (ADS)
Justin-Johnson, Carolyn
This qualitative study explored the persistence experiences of African-American women science graduates of a predominantly White institution (PWI). The purpose of the study was to promote a holistic understanding---complementing findings from quantitative studies---of how African-American women give meaning to their collegiate experiences. Eight recent graduates of two college science programs (biological sciences and chemistry) were selected to participate in the study because of their willingness to answer interview questions related to sensitive issues about their experiences. Data analysis included coding transcripts, creating a codebook, memo writing, and constructing a model single-case event-flow network and a conceptually clustered matrix. Participants in the study shared a common viewpoint about the unwelcoming and non-supportive environment that they navigated to persist to graduation. Specifically, they identified a combination of (1) non-supportive mechanisms that could have been deterrents to their persistence and (2) supportive mechanisms that were instrumental in helping them to cope with negative experiences on campus that made them feel "uncomfortable" and alienated as one of the few African-American women in science classes. Findings in this study suggest that it is imperative for predominantly White institutions to organize reform efforts around creating more welcoming and inclusive campus environments, especially in the sciences, for African-American women---thus promoting satisfying college experiences that lead to degree and career attainment.
Birge, Max; Duffy, Stephen; Miler, Joanna Astrid; Hajek, Peter
2017-11-04
The 'conversion rate' from initial experimentation to daily smoking is a potentially important metric of smoking behavior, but estimates of it based on current representative data are lacking. The Global Health Data Exchange was searched for representative surveys conducted in English speaking, developed countries after year 2000 that included questions about ever trying a cigarette and ever smoking daily. The initial search identified 2776 surveys that were further screened for language, location, year, sample size, survey structure and representativeness. 44 surveys that passed the screening process were accessed and their codebooks were examined to see whether the two questions of interest were included. Eight datasets allowed extraction or estimation of relevant information. Survey quality was assessed with regards to response rates, sampling methods and data collection procedures. PRISMA guidelines were followed, with explicit rules for approaching derived variables and skip patterns. Proportions were pooled using random effects meta-analysis. The eight surveys used representative samples of the general adult population. Response rates varied from 45% to 88%. Survey methods were on par with the best practice in this field. Altogether 216,314 respondents were included of whom 60.3% (95%CI 51.3-69.3) ever tried a cigarette. Among those, 68.9% (95% CI 60.9-76.9%) progressed to daily smoking. Over two thirds of people who try one cigarette become, at least temporarily, daily smokers. The finding provides strong support for the current efforts to reduce cigarette experimentation among adolescents. The transition from trying the first cigarette through occasional to daily smoking usually implies that a recreational activity is turning into a compulsive need that has to be satisfied virtually continuously. 
The 'conversion rate' from initial experimentation to daily smoking is thus a potentially important metric of smoking behavior, but estimates of it based on representative data are lacking. The present meta-analysis addressed this gap. Currently, about two thirds of non-smokers experimenting with cigarettes progress to daily smoking. The finding strongly supports the current efforts to reduce cigarette experimentation among adolescents.
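The random-effects pooling step mentioned above can be sketched with a DerSimonian-Laird estimator. This is a generic illustration, not the authors' exact analysis: it pools raw proportions with binomial within-study variances (published analyses often logit- or arcsine-transform proportions first), and the survey proportions and sample sizes below are invented.

```python
import math

def dersimonian_laird(props, ns):
    """DerSimonian-Laird random-effects pooling of proportions.
    Returns the pooled proportion and a 95% confidence interval."""
    within_var = [p * (1 - p) / n for p, n in zip(props, ns)]
    w = [1.0 / v for v in within_var]                      # fixed-effect weights
    fixed = sum(wi * pi for wi, pi in zip(w, props)) / sum(w)
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, props))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(props) - 1)) / c)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in within_var]        # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_star, props)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-survey proportions who ever tried a cigarette.
pooled, ci = dersimonian_laird([0.55, 0.62, 0.68], [5000, 12000, 3000])
```

The between-study variance tau² widens the interval relative to a fixed-effect pooled estimate, which is why heterogeneous surveys like these yield the fairly wide confidence intervals reported in the abstract.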
Mack, Natasha; Evens, Emily M; Tolley, Elizabeth E; Brelsford, Kate; Mackenzie, Caroline; Milford, Cecilia; Smit, Jennifer A; Kimani, Joshua
2014-01-01
Stakeholders continue to discuss the appropriateness of antiretroviral-based pre-exposure prophylaxis (PrEP) for HIV prevention among sub-Saharan African and other women. In particular, women need formulations they can adhere to given that effectiveness has been found to correlate with adherence. Evidence from family planning shows that contraceptive use, continuation and adherence may be increased by expanding choices. To explore the potential role of choice in women's use of HIV prevention methods, we conducted a secondary analysis of research with female sex workers (FSWs) and men and women in serodiscordant couples (SDCs) in Kenya, and adolescent and young women in South Africa. Our objective here is to present their interest in and preferences for PrEP formulations - pills, gel and injectable. In this qualitative study, in Kenya we conducted three focus groups with FSWs, and three with SDCs. In South Africa, we conducted two focus groups with adolescent girls, and two with young women. All focus groups were audio-recorded, transcribed and translated into English as needed. We structurally and thematically coded transcripts using a codebook and QSR NVivo 9.0; generated code reports; and conducted inductive thematic analysis to identify major trends and themes. All groups expressed strong interest in PrEP products. In Kenya, FSWs said the products might help them earn more money, because they would feel safer accepting more clients or having sex without condoms for a higher price. SDCs said the products might replace condoms and reanimate couples' sex lives. Most sex workers and SDCs preferred an injectable because it would last longer, required little intervention and was private. In South Africa, adolescent girls believed it would be possible to obtain the products more privately than condoms. Young women were excited about PrEP but concerned about interactions with alcohol and drug use, which often precede sex. 
Adolescents did not prefer a particular formulation but noted benefits and limitations of each; young women's preferences also varied. The circumstances and preferences of sub-Saharan African women are likely to vary within and across groups and to change over time, highlighting the importance of choice in HIV prevention methods.
2013-01-01
Background The 2009–10 influenza pandemic was a major public health concern. Vaccination was recommended by the health authorities, but compliance was not optimal and perception of the presumed associated risks was high among the public. The Internet is increasingly being used as a source of health information and advice. The aim of the study was to investigate the characteristics of websites providing information about flu vaccine and the quality of the information provided. Methods Website selection was performed in autumn 2010 by entering eight keywords in two of the most commonly used search engines (Google.com and Yahoo.com). The first three result pages were analysed for each search, giving a total of 480 occurrences. Page rank was evaluated to assess visibility. Websites based on Web 2.0 philosophy, websites merely displaying popular news/articles and single files were excluded from the subsequent analysis. We analysed the selected websites (using WHO criteria) as well as the information provided, using a codebook for pro/neutral websites and a qualitative approach for the adverse ones. Results Of the 89 websites selected, 54 dealt with seasonal vaccination, three with anti-H1N1 vaccination and 32 with both. Rank analysis showed that Yahoo's first pages returned only classic websites (ones not falling into any other category) plus one social network, while Google's returned 21 classic websites, six sites displaying popular news/articles and one blog. Analysis of the selected websites revealed that the majority of them (88.8%) had a positive/neutral attitude to flu vaccination. Pro/neutral websites distinguished themselves from the adverse ones by some revealing features like greater transparency, credibility and privacy protection. Conclusions We found that the majority of the websites providing information on flu vaccination were pro/neutral and gave sufficient information. 
We suggest that antivaccinationist information may have been spread by a different route, such as via Web 2.0 tools, which may be more prone to the dissemination of “viral” information. The page ranking analysis revealed the crucial role of search engines regarding access to information on the Internet. PMID:23360311
A general framework for sensor-based human activity recognition.
Köping, Lukas; Shirahama, Kimiaki; Grzegorzek, Marcin
2018-04-01
Today's wearable devices like smartphones, smartwatches and intelligent glasses collect a large amount of data from their built-in sensors like accelerometers and gyroscopes. These data can be used to identify a person's current activity and in turn can be utilised for applications in the field of personal fitness assistants or elderly care. However, developing such systems is subject to certain restrictions: (i) since more and more new sensors will be available in the future, activity recognition systems should be able to integrate these new sensors with a small amount of manual effort and (ii) such systems should avoid high acquisition costs for computational power. We propose a general framework that achieves an effective data integration based on the following two characteristics: Firstly, a smartphone is used to gather and temporally store data from different sensors and transfer these data to a central server. Thus, various sensors can be integrated into the system as long as they have programming interfaces to communicate with the smartphone. The second characteristic is a codebook-based feature learning approach that can encode data from each sensor into an effective feature vector only by tuning a few intuitive parameters. In the experiments, the framework is realised as a real-time activity recognition system that integrates eight sensors from a smartphone, smartwatch and smartglasses, and its effectiveness is validated from different perspectives such as accuracies, sensor combinations and sampling rates.
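A codebook-based feature-learning step of this kind can be sketched as follows. Since the paper's exact construction is not given here, this assumes k-means clustering of fixed-width sensor subsequences and histogram encoding, which is the common form of the approach; the signals, window width and codebook size are invented parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def sliding_windows(signal, width, step):
    """Cut a 1-D sensor stream into overlapping fixed-width subsequences."""
    return np.array([signal[i:i + width]
                     for i in range(0, len(signal) - width + 1, step)])

def build_codebook(subseqs, k, iters=15):
    """Cluster subsequences into k codewords with plain k-means."""
    centers = subseqs[rng.choice(len(subseqs), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(subseqs[:, None] - centers[None], axis=2).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = subseqs[labels == j].mean(0)
    return centers

def encode(signal, codebook, width=16, step=8):
    """Feature vector = normalized histogram of codeword assignments."""
    subs = sliding_windows(signal, width, step)
    labels = np.linalg.norm(subs[:, None] - codebook[None], axis=2).argmin(1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Two synthetic accelerometer streams standing in for different activities.
t = np.arange(512)
walking = np.sin(t / 3) + 0.1 * rng.standard_normal(512)   # periodic motion
resting = 0.1 * rng.standard_normal(512)                   # near-still sensor
codebook = build_codebook(
    np.vstack([sliding_windows(walking, 16, 8),
               sliding_windows(resting, 16, 8)]), k=8)
f_walk, f_rest = encode(walking, codebook), encode(resting, codebook)
```

The "few intuitive parameters" the abstract mentions correspond here to the window width, step and codebook size; a classifier on the server would then be trained on these histogram features.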
Successful Aging Among African American Older Adults With Type 2 Diabetes.
Chard, Sarah; Harris-Wallace, Brandy; Roth, Erin G; Girling, Laura M; Rubinstein, Robert; Reese, Ashanté M; Quinn, Charlene C; Eckert, J Kevin
2017-03-01
Rowe and Kahn's concept of successful aging remains an important model of well-being; additional research is needed, however, to identify how economically and socially disadvantaged older adults experience well-being, including the role of life events. The findings presented here help address this gap by examining the subjective construction of well-being among urban African American adults (age ≥ 50) with Type 2 diabetes. As part of the National Institute on Aging-funded Subjective Experience of Diabetes among Urban Older Adults study, ethnographers interviewed African American older adults with diabetes (n = 41) using an adaptation of the McGill Illness Narrative Interview. Data were coded using an inductively derived codebook. Codes related to aging, disease prognosis, and "worldview" were thematically analyzed to identify constructions of well-being. Participants evaluate their well-being through comparisons to the past and to the illnesses of friends and family. Diabetes self-care motivates social engagement and care of others. At times, distrust of medical institutions means well-being also is established through nonadherence to suggested biomedical treatment. Hardship and illness in participants' lives frame their diabetes experience and notions of well-being. Providers need to be aware of the social, economic, and political lenses shaping diabetes self-management and subjective well-being.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as the array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) at an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) at an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image-processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
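The vector quantization step, with the vector defined as in the abstract (the co-located pixel values from every channel), can be sketched as follows. This is a toy illustration on synthetic data, with plain k-means refinement standing in for full LBG-style codebook training; the lossless Huffman stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_codebook(vectors, size, iters=10):
    """Codebook training by k-means refinement (an LBG-style stand-in)."""
    centers = vectors[rng.choice(len(vectors), size, replace=False)].copy()
    for _ in range(iters):
        labels = ((vectors[:, None] - centers[None]) ** 2).sum(2).argmin(1)
        for j in range(size):
            if np.any(labels == j):
                centers[j] = vectors[labels == j].mean(0)
    return centers

# Toy 7-channel image, 32x32 pixels. The vector for each pixel location
# stacks that location's value from every channel, as in the abstract.
channels, height, width = 7, 32, 32
image = rng.normal(100.0, 20.0, (channels, height, width))
vectors = image.reshape(channels, -1).T              # shape (1024, 7)
codebook = train_codebook(vectors, size=64)          # 6 bits per pixel vector
labels = ((vectors[:, None] - codebook[None]) ** 2).sum(2).argmin(1)
reconstructed = codebook[labels].T.reshape(channels, height, width)
rms = float(np.sqrt(np.mean((image - reconstructed) ** 2)))
# 7 channels x 8 bits reduced to a 6-bit index per location: about 9.3:1
# before any entropy coding of the indices (the abstract adds Huffman coding).
```

Growing the codebook lowers the RMS error at the cost of a larger index and a slower nearest-codeword search, which is the codebook-size trade-off the study examines.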
Callaghan, Katharine A; Fanning, Joseph B
2018-02-01
In the setting of end-of-life care, biases can interfere with patient articulation of goals and hinder provision of patient-centered care. No studies have addressed clinician bias or bias management specific to goals of care discussions at the end of life. This study aimed to identify and determine the prevalence of palliative care clinician biases and bias management strategies in end-of-life goals of care discussions. A semistructured interview guide with relevant domains was developed to facilitate data collection. Participants were asked directly to identify biases and bias management strategies applicable to this setting. Two researchers developed a codebook to identify themes using a 25% transcript sample through an iterative process based on grounded theory. Inter-rater reliability, evaluated using Cohen's κ, was 0.83, indicating near-perfect agreement between coders. The data approached saturation. A purposive sample of 20 palliative care clinicians in Middle Tennessee participated in interviews. The 20 clinicians interviewed identified 16 biases and 11 bias management strategies. The most frequently mentioned bias was a bias against aggressive treatment (n = 9), described as a clinician's assumption that most interventions at the end of life are not beneficial. The most frequently mentioned bias management strategy was self-recognition of bias (n = 17), described as acknowledging that bias is present. This is the first study identifying palliative care clinicians' biases and bias management strategies in end-of-life goals of care discussions.
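Cohen's κ, the intercoder-agreement statistic reported above (and in several of the other studies in this collection), corrects raw percent agreement for the agreement two coders would reach by chance given their marginal label frequencies. A minimal illustration, with the function name being our own:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal codes on the same items."""
    n = len(coder_a)
    assert n == len(coder_b) and n > 0
    # observed agreement: fraction of items coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement: product of each coder's marginal label rates
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[lab] * counts_b[lab]
                   for lab in set(counts_a) | set(counts_b)) / n ** 2
    return (observed - expected) / (1 - expected)
```

By the conventional Landis-and-Koch benchmarks, values above about 0.80 are read as near-perfect agreement, consistent with the 0.83 reported here.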
Prevalence of Marijuana-Related Traffic on Twitter, 2012–2013: A Content Analysis
Thompson, Leah; Rivara, Frederick P.; Whitehill, Jennifer M.
2015-01-01
This study assessed marijuana-related content posted by adolescents on Twitter and examined content variation before and after the 2012 U.S. election legalizing recreational use in two states. For two 3-week periods occurring 6 months before and after the election, a 1% random sample was obtained of all tweets matching a set of marijuana-related queries. Original content was separated from reposted content (retweets), and foreign language tweets and those not related to marijuana were excluded. Using a structured codebook, tweet content was categorized (e.g., mention of personal marijuana use, parents' views, perceived effects). Self-reported age was extracted from tweet metadata when available. Chi-square tests were used to assess differences in content by whether the user self-identified as an adolescent and to compare content pre- versus post-election. The full sample consisted of 71,901 tweets. After excluding nonrelevant tweets and separating original tweets from retweets, the analytic sample included 36,969 original tweets. A majority (65.6%) of original tweets by adolescents (n=1,928) reflected a positive attitude toward marijuana, and 42.9% indicated personal use. Of adolescents' tweets that mentioned parents, 36.0% indicated parental support for the adolescent's marijuana use. Tweets about personal marijuana use increased from 2012 to 2013, as did positive perceptions about the drug. Adolescents and others on Twitter are exposed to positive discussion normalizing use. Over the study period, Twitter was increasingly used to disclose marijuana use. PMID:26075917
Computer vision cracks the leaf code
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas
2016-01-01
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
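The codebook of visual elements described above is, in essence, a bag-of-visual-words representation: local shape and venation descriptors are quantized against a learned dictionary, and each leaf image becomes a histogram of visual-word counts that a family- or order-level classifier can consume. A schematic encoder follows; this is our own minimal sketch of the general technique, not the authors' pipeline.

```python
import numpy as np

def encode_bovw(descriptors, codebook):
    """Quantize one image's local descriptors against a visual-word
    codebook and return the normalized word histogram (the feature
    vector a downstream classifier would be trained on).

    descriptors: (m, d) array of local feature vectors.
    codebook:    (k, d) array of learned visual words.
    """
    # squared distance from every descriptor to every visual word
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The histograms can then be fed to any standard classifier, and per-word classifier weights projected back onto descriptor locations yield heat maps of diagnostic regions like those described in the abstract.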
Shahwan, Shazana; Fauziana, Restria; Satghare, Pratika; Vaingankar, Janhavi; Picco, Louisa; Chong, Siow Ann; Subramaniam, Mythily
2016-12-01
Youths are more likely to rebel against messages perceived to inhibit their independence. For antismoking campaigns to be effective with this population, adopting evidence-based strategies is crucial. In this study, we examined youths' reactions to past and ongoing antismoking campaigns and delineated the effective and ineffective components of campaigns as identified by them. Twelve focus group discussions were conducted with 91 youth smokers aged 15-29 years. Data were analysed using qualitative content analysis. A codebook was derived through an iterative process. The data were coded systematically by three coders, using Nvivo V.10. Fear appeals that had no immediate relevance to youths, and campaigns involving humour or sports/dance activities that distracted youths from the antismoking messages, were deemed ineffective. In contrast, the elements identified as efficacious were a positive tone, low-fear visual images, 'low-controlling language' and a genuine spokesperson. Youths tended to favour campaigns circulating on social media platforms. Importantly, youths voiced a lack of tangible support for their efforts to quit smoking. Participants expressed a preference for antismoking messages that were less authoritative, and perceived a distinct lack of support for their intentions to quit smoking. There is room for incorporating participants' suggestions in future antismoking campaigns. Future research is needed to identify barriers to accessing available support. Published by the BMJ Publishing Group Limited.
Qualitative exploration of a smoking cessation trial for people living with HIV in South Africa.
Krishnan, Nandita; Gittelsohn, Joel; Ross, Alexandra; Elf, Jessica; Chon, Sandy; Niaura, Raymond; Martinson, Neil; Golub, Jonathan E
2017-06-16
In South Africa, people living with HIV (PLWH) have a high prevalence of smoking, which undermines the beneficial effects of antiretroviral therapy (ART). However, little is known about barriers to smoking cessation and what interventions work for PLWH in this setting. A randomized trial comparing intensive anti-smoking counseling versus counseling and nicotine replacement therapy (NRT) was recently concluded in Klerksdorp, South Africa. In a post-trial follow-up, 23 in-depth interviews with patients and one focus group discussion with counselors from the trial were conducted. A codebook was developed and codes were applied to the transcripts, which were analyzed using a thematic analysis. Barriers at the economic, social/interpersonal, and individual levels induced stress, which hindered smoking cessation. Economic stressors included unemployment and poverty. Social or interpersonal stressors were lack of social support for quitting smoking and lack of social support due to having HIV. Individual stressors were traumatic life events. Alcohol was used to cope with stress and frequently co-occurred with smoking. Managing cravings was a barrier unrelated to stress. Participants proposed income and employment opportunities, group counseling and more frequent counseling as solutions to address stressors at different levels. NRT was helpful to mitigate cravings. Future smoking cessation interventions need to target barriers at multiple levels. Increasing the supply and duration of NRT may increase its effectiveness. Other behavioral approaches such as group counseling or peer counseling could hold promise in this setting but need to be tested for efficacy through randomized controlled trials. To our knowledge, this is the first qualitative study examining barriers to smoking cessation for people living with HIV in South Africa. Smoking is highly prevalent among people with HIV in South Africa and cessation interventions are urgently needed. 
A better understanding of barriers to smoking cessation that people with HIV face will lead to the development of contextually appropriate interventions. This study also provides feedback on interventions from a recently concluded smoking cessation randomized trial and will help guide the design of future smoking cessation trials. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved.
A Qualitative Study of US Clinical Ethics Services: Objectives and Outcomes.
McClimans, Leah; Pressgrove, Geah; Rhea, James
2016-01-01
The quality of clinical ethics services in health care organizations is increasingly seen as an important aspect of the overall quality of care. But measuring this quality is difficult because there is a lack of clarity and consensus regarding the objectives of clinical ethics and the best outcome domains to measure. The aim of this qualitative study is to explore the views of experts about the objectives and outcomes of clinical ethics services in the US. We interviewed 19 experts in clinical ethics, focusing on the appropriate objectives and outcomes of a clinical ethics service (CES). Participants were selected using a purposive snowball sampling strategy. The development of the interview protocol was informed by the clinical ethics literature as well as by research and theories that inform clinical ethics practice. Interviews were conducted by phone, recorded, and transcribed for individual analysis. Analysis proceeded through the development of a codebook of categories using QDA Miner software. Our experts identified 12 objectives and nine outcomes. Some of these identifications were familiar (e.g., mediation and satisfaction) and some were novel (e.g., be of service and transformation). We found that experts are divided in their emphasis on the kinds of objectives that are most important. In terms of outcomes, our experts were concerned with the appropriateness of different proxy and direct measures. This study provides the perspectives of a select group of experts on the objectives and outcomes appropriate for a CES in the United States. The themes identified will be used in future research to inform a Delphi study to refine and obtain expert consensus.
Farrell, Marie; Wallis, Nancy C; Evans, Marci Tyler
2007-01-01
American universities and nursing faculties, caught between the imperatives of community demand and university financial constraints, need to analyze their communities of interests' shared priorities for nursing education. This replication study's objective was to compare the priorities and attitudes of two nursing programs' communities of interest using appreciative inquiry (AI). The researchers used AI to conduct a qualitative, comparative analysis of data from two nursing programs. They used one-on-one and focus group interviews to examine stakeholders' views of the best of the nursing program's past, their vision and approaches to realizing the vision, and their roles in contributing to the vision they created. The researchers analyzed the qualitative data using a standardized codebook and content analysis. Respondents' priorities for both academic programs were similar, with the western respondents emphasizing nursing's contribution to quality care and the southern respondents emphasizing its leadership and commitment to diversity. Both identified the role of legislators and the community in partnering with nursing to secure funds for expansion. Both programs' respondents viewed nursing as a major part of the university and considered their role as supporters of the university's academic and financial goals. The two nursing programs appeared to harness external and internal support in their respective communities. While some priorities differed between the two nursing programs, respondents were aware of the ripple effect of decreased funding for nursing education on the delivery of nursing services to the community. Differences among the undergraduate and graduate students, which reflect a nursing program's student mix, underscore the priorities that nursing programs must emphasize.
Reigniting tobacco ritual: waterpipe tobacco smoking establishment culture in the United States.
Carroll, Mary V; Chang, Judy; Sidani, Jaime E; Barnett, Tracey E; Soule, Eric; Balbach, Edith; Primack, Brian A
2014-12-01
Waterpipe tobacco smoking (WTS) is an increasingly prevalent form of tobacco use in the United States. Its appeal may stem from its social, ritualistic, and aesthetic nature. Our aim in this study was to understand WTS as a social ritual with the goal of informing prevention efforts. We conducted a covert observational study consisting of 38 observation sessions in 11 WTS establishments in 3 U.S. cities. Data collection was based on an established conceptual framework describing ritualistic elements of tobacco use. Iterative codebook development and qualitative thematic synthesis were used to analyze data. Atmospheres ranged from quiet coffee shop to boisterous bar party environments. While some children and older adults were present, the majority of clientele were young adults. Men and women were evenly represented. However, there were 19 occurrences of a male smoking by himself, but no women smoked alone. The vast majority (94%) of the clientele were actively smoking waterpipes. All 83 observed groups manifested at least 1 of the ritual elements of our conceptual framework, while 41 of the 83 observed groups (49%) demonstrated all 4 ritual elements. Despite its heterogeneity, WTS is often characterized by 1 or more established elements of a tobacco-related social ritual. It may be valuable for clinical and public health interventions to acknowledge and address the ritualistic elements and social function of WTS. © The Author 2014. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved.
Interest in the use of computerized patient portals: role of the provider-patient relationship.
Zickmund, Susan L; Hess, Rachel; Bryce, Cindy L; McTigue, Kathleen; Olshansky, Ellen; Fitzgerald, Katharine; Fischer, Gary S
2008-01-01
Bioinformatics experts are developing interactive patient portals to help those living with diabetes and other chronic diseases to better manage their conditions. However, little is known about what influences patients' desires to use this technology. To discern the impact of the provider-patient relationship on interest in using a web-based patient portal. Qualitative analysis of focus groups. Ten focus groups involving 39 patients (range 2-7) recruited from four primary care practices. A qualitative approach was used, which involved reading transcribed texts until a consensus was reached on data interpretation. An intercoder reliability kappa score (0.89) was determined by comparing the provider-patient relationship talk selected by the two coders. A conceptual framework was developed, which involved the development and refinement of a codebook and the application of it to the transcripts. Interest in the portal was linked to dissatisfaction with the provider-patient relationship, including dissatisfaction with provider communication/responsiveness, the inability to obtain medical information, and logistical problems with the office. Disinterest in the portal was linked to satisfaction with the provider-patient relationship, including provider communication/responsiveness, difficulty in using the portal, and fear of losing relationships and e-mail contact with the provider. No patient identified encrypted e-mail communication through the portal as an advantage. Promoting the use of computerized portals requires patient-based adaptations. These should include ease of use, direct provider e-mail, and reassurances that access and interpersonal relationships will not be lost. Education is needed about privacy concerns regarding traditional e-mail communication.
Ladekjær Larsen, Eva; Smorawski, Gitte Andsager; Kragbak, Katrine Lund; Stock, Christiane
2016-04-29
High alcohol consumption among university students is a well-researched health concern in many countries. At universities in Denmark, campus alcohol policies are a new phenomenon, where they exist at all, and little is known about how students perceive them. The aim of this study is to explore students' perceptions of alcohol policies on campus in relation to attitudes and practices of alcohol consumption. We conducted six focus group interviews with students from the University of Southern Denmark at two different campuses. The interviews covered topics such as experiences of and attitudes towards alcohol consumption among students, regulations, and norms of alcohol use on campus. The analysis followed a pre-determined codebook. Alcohol consumption is an integrated practice on campus. Most participants found it unnecessary to impose major restrictions; instead, regulation was exercised socially by students themselves and tied to what was considered appropriate behavior. Students were, however, open to smaller limitations on alcohol availability, including banning the sale of alcohol in vending machines and limiting consumption during the introduction week, primarily to avoid socially excluding students who do not drink. Some international students perceived the level of consumption as too high and distinguished situations in which they found drinking unusual. The study showed that alcohol is a central part of students' lives. When developing and implementing alcohol policies on campus, seeking student input in the process and addressing alcohol policies in the larger community will likely improve the success of the policies.
Portrayal of Alcohol Brands Popular Among Underage Youth on YouTube: A Content Analysis.
Primack, Brian A; Colditz, Jason B; Rosen, Eva B; Giles, Leila M; Jackson, Kristina M; Kraemer, Kevin L
2017-09-01
We characterized leading YouTube videos featuring alcohol brand references and examined video characteristics associated with each brand and video category. We systematically captured the 137 most relevant and popular videos on YouTube portraying alcohol brands that are popular among underage youth. We used an iterative process for codebook development. We coded variables within domains of video type, character sociodemographics, production quality, and negative and positive associations with alcohol use. All variables were double coded, and Cohen's kappa was greater than 0.80 for all variables except age, which was eliminated. There were 96,860,936 combined views for all videos. The most common video type was "traditional advertisements," which comprised 40% of videos. Of the videos, 20% were "guides" and 10% focused on chugging a bottle of distilled spirits. While 95% of videos featured males, 40% featured females. Alcohol intoxication was present in 19% of videos. Aggression, addiction, and injuries were uncommonly identified (2%, 3%, and 4%, respectively), but 47% of videos contained humor. Traditional advertisements represented the majority of videos related to Bud Light (83%) but only 18% of Grey Goose and 8% of Hennessy videos. Intoxication was most present in chugging demonstrations (77%), whereas addiction was only portrayed in music videos (22%). Videos containing humor ranged from 11% for music-related videos to 77% for traditional advertisements. YouTube videos depicting the alcohol brands favored by underage youth are heavily viewed, and the majority are traditional or narrative advertisements. Understanding characteristics associated with different brands and video categories may aid in intervention development.
Patient and Provider Perspectives on a Mind-Body Program for Grieving Older Adults.
Bui, Eric; Chad-Friedman, Emma; Wieman, Sarah; Grasfield, Rachel H; Rolfe, Allison; Dong, Melissa; Park, Elyse R; Denninger, John W
2018-06-01
Spousal bereavement in older age is a major stressor associated with an increase in both mental and physical problems. The Stress Management and Resiliency Training: Relaxation Response Resiliency Program (SMART-3RP) is an 8-week multimodal mind-body program that targets stress and has been found efficacious in decreasing the mental and physical manifestations of stress in varied populations. This qualitative study sought to investigate the relevance, credibility, and feasibility of the SMART-3RP in the community. Focus groups were conducted among both older widowed adults and providers who support them in the community (eg, chaplains, hospice bereavement coordinators). Transcripts were coded independently by coders trained in qualitative research. Codebooks were created based on both general themes and detailed subthemes present in the transcripts. Findings from 4 focus groups revealed a general convergence between the needs of recently widowed older adults reported by widow(er)s and community providers alike and needs identified in the literature. Several components of the SMART-3RP target many of these needs (eg, social support, stress awareness, coping skills), making both community providers and widow(er)s report that the SMART-3RP is logical (89%) and would be helpful (100%) and successful in reducing symptoms (78%). Additionally, all widow(er)s reported a willingness to participate (100%). Feedback from the focus groups was used to adapt the SMART-3RP to improve its relevance to grief-related stress. Our findings suggest that the SMART-3RP may be helpful in decreasing somatic and psychological distress in older adults who have lost a spouse.
Quality of head injury coding from autopsy reports with AIS © 2005 update 2008.
Schick, Sylvia; Humrich, Anton; Graw, Matthias
2018-02-28
Coding injuries from autopsy reports of traffic accident victims according to the Abbreviated Injury Scale AIS © 2005 update 2008 [1] is quite time consuming. The suspicion arose that many issues leading to discussion between coder and control reader were based on information required by the AIS that was not documented in the autopsy reports. To quantify this suspicion, we introduced an AIS-detail-indicator (AIS-DI): to each injury in the AIS Codebook, one letter from A to N was assigned, indicating the level of detail. Rules were formulated to make the assignments repeatable. This scheme was applied to a selection of 149 multiply injured traffic fatalities. The frequencies of "not A" codes were calculated for each body region, and the reasons why the most detailed level A had not been coded were analysed. As a first finding, the results for the head region are presented. A total of 747 AIS head injury codes were found in 137 traffic fatalities, and 60% of these injuries were coded with an AIS-DI of level A. There are three different explanations for codes of AIS-DI "not A": Group 1, "Missing information in autopsy report" (5%); Group 2, "Clinical data required by AIS" (20%); and Group 3, "AIS system determined" (15%). Groups 1 and 2 show consequences for the ISS in 25 cases. Other body regions might perform differently. The AIS-DI can indicate the quality of the underlying data basis and, depending on the aims of different AIS users, it can be a helpful tool for quality checks.
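The AIS-DI bookkeeping described above — one detail letter per coded injury, then the per-region share of codes at the most detailed level A — reduces to a simple tally. A sketch under our own naming assumptions (the AIS-DI scheme itself, with letters A through N, is as described in the abstract):

```python
from collections import Counter, defaultdict

def ais_di_level_a_share(coded_injuries):
    """Per-region share of AIS codes at the most detailed level A.

    coded_injuries: iterable of (body_region, detail_letter) pairs,
    with detail_letter in 'A'..'N' as assigned from the AIS codebook.
    Returns a dict mapping each body region to the fraction of its
    codes carrying an AIS-DI of level A.
    """
    per_region = defaultdict(Counter)
    for region, letter in coded_injuries:
        per_region[region][letter] += 1
    return {region: counts["A"] / sum(counts.values())
            for region, counts in per_region.items()}
```

Breaking the "not A" remainder down by the three explanation groups (missing autopsy information, clinical data required by the AIS, and AIS-system-determined codes) is the same tally keyed on a group label instead of the letter.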
State Laws on Emergency Holds for Mental Health Stabilization.
Hedman, Leslie C; Petrila, John; Fisher, William H; Swanson, Jeffrey W; Dingman, Deirdre A; Burris, Scott
2016-05-01
Psychiatric emergency hold laws permit involuntary admission to a health care facility of a person with an acute mental illness under certain circumstances. This study documented critical variation in state laws, identified important questions for evaluation research, and created a data set of laws to facilitate public health law research on the impact of emergency hold laws on mental health outcomes. The research team built a 50-state, open-source data set of laws currently governing emergency holds. A protocol and codebook were developed so that the study may be replicated and extended longitudinally, allowing future research to accurately capture changes to current laws. Although every state and the District of Columbia have emergency hold laws, state law varies on the duration of emergency holds, who can initiate an emergency hold, the extent of judicial oversight, and the rights of patients during the hold. The core criterion justifying an involuntary hold is mental illness that results in danger to self or others, but many states have added further specifications. Only 22 states require some form of judicial review of the emergency hold process, and only nine require a judge to certify the commitment before a person is hospitalized. Five states do not guarantee assessment by a qualified mental health professional during the emergency hold. The article highlights variability in state law for emergency holds of persons with acute mental illness. How this variability affects the individual, the treatment system, and law enforcement behavior is unknown. Research is needed to guide policy making and implementation on these issues.
Do cigarette health warning labels comply with requirements: A 14-country study.
Cohen, Joanna E; Brown, Jennifer; Washington, Carmen; Welding, Kevin; Ferguson, Jacqueline; Smith, Katherine C
2016-12-01
The Framework Convention on Tobacco Control, a global health treaty ratified by over 175 countries, calls on countries to ensure that tobacco packages carry health warning labels (HWLs) describing the harmful effects of tobacco use. We assessed the extent of compliance with 14 countries' HWL requirements. Unique cigarette packs were purchased in 2013 using a systematic protocol in 12 distinct neighborhoods within three of the ten most populous cities in the 14 low- and middle-income countries with the greatest number (count) of smokers. HWL compliance codebooks were developed for each country based on the details of country-specific HWL requirements, with up to four common compliance indicators assessed for each country (location, size, label elements, text size). Packs (n=1859) were double coded for compliance. Compliance was examined by country and pack characteristics, including parent company and brand family. Overall, 72% of coded cigarette packs were compliant with all relevant compliance indicators, ranging from 17% in the Philippines to 94% in Mexico. Compliance was highest for location of the warning (ranging from 75%-100%) and lowest for warning size (ranging from 46%-99%). Compliance was higher for packs bought in high SES neighborhoods, and varied by parent company and brand family. This multi-country study found at least one pack in every country - and many packs in some countries - that were not compliant with key requirements for health warning labels in the country of purchase. Non-compliance may be exacerbating health disparities. Tobacco companies should be held accountable for complying with country HWL requirements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Albert, David A; Sadowsky, Donald; Papapanou, Panos; Conicella, Mary L; Ward, Angela
2006-01-01
Background Chronic medical conditions have been associated with periodontal disease. This study examined whether periodontal treatment can contribute to changes in overall risk and medical expenditures for three chronic conditions [Diabetes Mellitus (DM), Coronary Artery Disease (CAD), and Cerebrovascular Disease (CVD)]. Methods A total of 116,306 enrollees participating in a preferred provider organization (PPO) insurance plan with continuous dental and medical coverage between January 1, 2001 and December 30, 2002, exhibiting one of three chronic conditions (DM, CAD, or CVD), were examined. This study was a population-based retrospective cohort study. Aggregate costs for medical services were used as a proxy for overall disease burden. The cost for medical care was measured in Per Member Per Month (PMPM) dollars by aggregating all medical expenditures by diagnoses that corresponded to the International Classification of Diseases, 9th Edition (ICD-9) codebook. To control for differences in the overall disease burden of each group, a previously calculated retrospective risk score utilizing Symmetry Health Data Systems, Inc. Episode Risk Groups™ (ERGs) was applied to the DM, CAD, or CVD diagnosis groups within distinct dental services groups, including periodontal treatment (periodontitis or gingivitis), dental maintenance services (DMS), other dental services, or a no dental services group. The differences between group means were tested for statistical significance using log-transformed values of the individual total paid amounts. Results The DM, CAD and CVD condition groups who received periodontitis treatment incurred significantly higher PMPM medical costs than enrollees who received gingivitis treatment, DMS, other dental services, or no dental services (p < .001). 
DM, CAD, and CVD condition groups who received periodontitis treatment had significantly lower retrospective risk scores (ERGs) than enrollees who received gingivitis treatment, DMS, other dental services, or no dental services (p < .001). Conclusion This two-year retrospective examination of a large insurance company database revealed a possible association between periodontal treatment and PMPM medical costs. The findings suggest that periodontitis treatment (a proxy for the presence of periodontitis) has an impact on the PMPM medical costs for the three chronic conditions (DM, CAD, and CVD). Additional studies are indicated to examine if this relationship is maintained after adjusting for confounding factors such as smoking and SES. PMID:16914052
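The abstract above states that differences between group means were tested using log-transformed paid amounts, without naming the specific test. As an illustrative sketch only, a Welch t statistic computed on log-transformed costs might look like the following; the cost values in the usage example are hypothetical.

```python
from math import log, sqrt

def welch_t_on_logs(x, y):
    """Welch t statistic comparing means of log-transformed cost samples."""
    lx = [log(v) for v in x]
    ly = [log(v) for v in y]

    def mean_var(v):
        # sample mean and unbiased sample variance
        m = sum(v) / len(v)
        s2 = sum((u - m) ** 2 for u in v) / (len(v) - 1)
        return m, s2

    mx, vx = mean_var(lx)
    my, vy = mean_var(ly)
    return (mx - my) / sqrt(vx / len(x) + vy / len(y))

# hypothetical PMPM costs for two dental-services groups
t = welch_t_on_logs([100, 120, 110, 130], [50, 55, 60, 52])
```

A positive t indicates the first group's geometric-mean cost exceeds the second's; a p-value would additionally require the Welch-Satterthwaite degrees of freedom and a t distribution.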
Balde, Mamadou Diouldé; Bangoura, Abou; Diallo, Boubacar Alpha; Sall, Oumar; Balde, Habibata; Niakate, Aïssatou Sona; Vogel, Joshua P; Bohren, Meghan A
2017-01-13
Reducing maternal morbidity and mortality remains a key health challenge in Guinea. Anecdotal evidence suggests that women in Guinea are subjected to mistreatment during childbirth in health facilities, but limited research exists on this topic. This study was conducted to better understand the social norms and the acceptability of four scenarios of mistreatment during childbirth, from the perspectives of women and service providers. The study used qualitative methods, including in-depth interviews (IDIs) and focus group discussions (FGDs) with women of reproductive age, midwives, nurses and doctors, and was conducted in one urban area (Mamou) and one peri-urban area (Pita) in Guinea. Participants were presented with four scenarios of mistreatment during childbirth, including a provider: (1) slapping a woman; (2) verbally abusing a woman; (3) refusing to help a woman; and (4) forcing a woman to give birth on the floor. Data were collected in local languages (Pular and Malinké) and French, and transcribed and analyzed in French. We used a thematic analysis approach and manually coded the data using a codebook developed for the project. A total of 40 IDIs and 8 FGDs were conducted with women of reproductive age, 5 IDIs with doctors, and 13 IDIs with midwives. Most women were not accepting of any of the scenarios, unless the action was perceived to be used to save the life of the mother or child. However, they perceived a woman's disobedience and uncooperativeness as contributing to her poor treatment. Women reacted to this mistreatment by accepting the poor treatment, refusing to use the same hospital again, taking revenge against the provider, or complaining to hospital management. Service providers were accepting of mistreatment when women were disobedient or uncooperative, or when it was seen as necessary to save the life of the baby. This is the first known study on mistreatment of women during childbirth to be conducted in Guinea.
Both women and service providers were accepting of mistreatment during childbirth under certain conditions. Any approach to preventing and eliminating mistreatment during childbirth must consider these important contextual and social norms and develop a comprehensive intervention that addresses root causes. Further research is needed on how to measure mistreatment during childbirth in Guinea.
Friedman, Daniela B; Kim, Sei-Hill; Tanner, Andrea; Bergeron, Caroline D; Foster, Caroline; General, Kevin
2014-07-01
Clinical trials (CTs) are important for advancing public health and medical research; however, CT recruitment is challenging. The high reading level of CT information and the technical language of providers or researchers can serve as barriers to recruitment. Prior studies on the informed consent process found that consent documents often contain complicated terms. Limited research has examined resources specifically used to recruit individuals into CTs. The purpose of this study was to examine the content and readability of CT recruitment education resources in one U.S. state. Convenience sampling was employed for the collection of CT recruitment materials. A codebook was developed based on previous content analyses and emergent themes from statewide focus groups about CTs. A total of 127 materials were collected and analyzed (37.8% print; 62.2% Web). Most content was focused on treatment-related CTs (60.6%). Inclusion criteria related to specific disease conditions (88.9%) and age (73.6%) were described most often. Only 30% of resources had an explicit call to action. The overall mean readability level was Grade 11.7. Web-based materials were significantly more likely to be written at a higher grade level than print materials (p ≤ .0001). Readability also differed significantly according to resource distributor/creator, CT type, person quoted, and the presence or absence of inclusion criteria and an explicit call to action. Our study provides insight into the content and difficulty level of recruitment materials intended to provide initial information about a CT. Future studies should examine individuals' comprehension of recruitment materials and how participation intentions are associated with recruitment messages. Copyright © 2014 Elsevier Inc. All rights reserved.
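The abstract above reports a mean readability of Grade 11.7 but does not name the formula used. A common choice for grade-level readability is the Flesch-Kincaid grade level; the sketch below uses that formula with a crude vowel-group syllable counter, so it is illustrative rather than a validated implementation.

```python
import re

def count_syllables(word):
    # crude heuristic: count runs of vowels as syllables (at least one)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level of a text passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    # standard Flesch-Kincaid grade formula
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
```

Short, monosyllabic sentences score at or below early grade levels, while the long clause-heavy sentences typical of consent documents push the grade well into the range the study reports.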
Kannaley, Kristie; Mehta, Shreya; Yelton, Brooks; Friedman, Daniela B
2018-01-01
Limited research takes a socio-biographical approach to studying the experiences and perspectives of individuals affected by Alzheimer's disease and related dementias (ADRD). The purpose of this study was to thematically analyze blog narratives written by people with ADRD and by care partners in order to increase understanding of their experiences. Nineteen blogs written by people with ADRD and 44 blogs written by care partners were analyzed. The first two authors utilized line-by-line open coding to analyze five posts from each group for the development of a codebook. Using NVivo software, the first author proceeded to code the remaining blogs for emergent themes and subcategories. Emergent themes included (1) effects of ADRD on the person with ADRD and/or the care partner; (2) seeing the positives; (3) feeling out of control; (4) advocacy and empowerment; (5) coping mechanisms and compensatory strategies; and (6) candid descriptions of experiences with ADRD. These themes also encompassed numerous subcategories that are discussed in this paper. Results from this study provide insights into the experiences of individuals affected by ADRD. Writers discussed several topics that are consistent with research on illness narratives of individuals with chronic diseases, including loss of identity, strategies for coping, and poignant descriptions of life with the disease. This study provides information in the form of overlapping themes from the first-person perspectives of numerous individuals affected by ADRD. This type of data is crucial to understanding the experiences of people who live with ADRD.
Latino parents' perceptions of weight terminology used in pediatric weight counseling.
Knierim, Shanna Doucette; Rahm, Alanna Kulchak; Haemer, Matthew; Raghunath, Silvia; Martin, Carmen; Yang, Alyssa; Clarke, Christina; Hambidge, Simon J
2015-01-01
To identify which English and Spanish terms Latino parents consider motivating, as well as culturally and linguistically appropriate, for provider use during weight counseling of overweight and obese Latino youth. Latino parent perceptions of common Spanish and English terms for overweight were discussed with 54 parents in 6 focus groups (3 English, 3 Spanish). Atlas.ti software was used for qualitative analysis. An initial codebook was used to code passages for English and Spanish terminology separately. Subsequent changes to the coded passages and the creation of new codes were made by team consensus. "Demasiado peso para su salud" (too much weight for his/her health) was the only phrase for excess weight that was consistently identified as motivating and inoffensive by Spanish-speaking parents. "Sobrepeso" (overweight), a term commonly used by health care providers, was motivating to some parents but offensive to others. English-speaking parents had mixed reactions to "unhealthy weight," "weight problem," and "overweight," finding them motivating, confusing, or insulting. Parents consistently found "fat" ("gordo") and "obese" ("obeso") offensive. Most participants found growth charts and the term "BMI" confusing. Parents consistently reported that providers could enhance motivation and avoid offending families by linking a child's weight to health risks, particularly diabetes. "Demasiado peso para su salud" (too much weight for his/her health) was motivating to many Spanish-speaking Latino parents. Among English-speaking Latino parents, no single English term emerged as motivating, well understood, and inoffensive. Linking a child's excess weight with increased health risks was motivating and valuable to many parents regardless of the language spoken. Copyright © 2015 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
A Descriptive Analysis of End-of-Life Conversations With Long-Term Glioblastoma Survivors.
Miranda, Stephen P; Bernacki, Rachelle E; Paladino, Joanna M; Norden, Andrew D; Kavanagh, Jane E; Palmor, Marissa C; Block, Susan D
2018-05-01
Early, high-quality serious illness (SI) conversations are critical for patients with glioblastoma (GBM) but are often mistimed or mishandled. To describe the prevalence, timing, and quality of documented SI conversations and evaluate their focus on patient goals/priorities. Thirty-three patients with GBM enrolled in the control group of a randomized controlled trial of a communication intervention and were followed for 2 years or until death. At baseline, all patients answered a validated question about preferences for life-extending versus comfort-focused care and completed a Life Priorities Survey about their goals/priorities. In this secondary analysis, retrospective chart review was performed for 18 patients with GBM who died. Documented SI conversations were systematically identified and evaluated using a codebook reflecting 4 domains: prognosis, goals/priorities, end-of-life planning, and life-sustaining treatments. Patient goals/priorities were compared to documentation. At baseline, 16 of 24 patients preferred life-extending care. In the Life Priorities Survey, goals/priorities most frequently ranked among the top 3 were "Live as long as possible," "Be mentally aware," "Provide support for family," "Be independent," and "Be at peace." Fifteen of 18 patients had at least 1 documented SI conversation (range: 1-4). Median timing of the first documented SI conversation was 84 days before death (range: 29-231; interquartile range: 46-119). Fifteen patients had documentation about end-of-life planning, with "hospice" and "palliative care" most frequently documented. Five of 18 patients had documentation about their goals. Patients with GBM had multiple goals/priorities with potential treatment implications, but documentation showed SI conversations occurred relatively late and infrequently reflected patient goals/priorities.
Ladin, Keren; Lin, Naomi; Hahn, Emily; Zhang, Gregory; Koch-Weser, Susan; Weiner, Daniel E
2017-08-01
Although shared decision-making (SDM) can better align patient preferences with treatment, barriers remain incompletely understood and the impact on patient satisfaction is unknown. This was a qualitative study using semistructured interviews. A purposive sample of prevalent dialysis patients ≥65 years of age at two facilities in Greater Boston was selected for diversity in time from initiation, race, modality, and vintage. A codebook was developed, and interrater reliability was 89%. Codes were discussed and organized into themes. A total of 31 interviews with 23 in-center hemodialysis patients, 1 home hemodialysis patient, and 7 peritoneal dialysis patients were completed. The mean age was 76 ± 9 years. Two dominant themes (with related subthemes) emerged: decision-making experiences and satisfaction, and barriers to SDM. Subthemes included negative versus positive decision-making experiences, struggling for autonomy, being a 'good patient' and lack of choice. In spite of believing that dialysis initiation should be the patient's choice, no patients perceived that they had made a choice. Patients explained that this was due to the perception of imminent death or to the belief that the decision to start dialysis belonged to physicians. Clinicians and family frequently overrode patient preferences, with patient autonomy honored mostly in selecting the dialysis modality. Poor decision-making experiences were associated with low treatment satisfaction. Despite recommendations for SDM, many older patients were unaware that dialysis initiation was voluntary, held mistaken beliefs about their prognosis and were not engaged in decision-making, resulting in poor satisfaction. Patients desired greater information, specifically focusing on the acuity of their choice, prognosis and goals of care. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
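The abstract above reports interrater reliability as 89%, which reads as simple percent agreement. A minimal sketch of percent agreement, alongside Cohen's kappa (which corrects for chance agreement and is the more conservative statistic), might look like this; the example code labels are hypothetical.

```python
def percent_agreement(a, b):
    """Fraction of items two coders labeled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance."""
    labels = set(a) | set(b)
    n = len(a)
    po = percent_agreement(a, b)            # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical codes assigned by two raters to four transcript passages
coder1 = ['x', 'x', 'y', 'y']
coder2 = ['x', 'x', 'y', 'x']
```

For these hypothetical codes, agreement is 0.75 but kappa is only 0.5, which is why many content analyses report kappa rather than raw agreement.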
Albino, Sandra; Tabb, Karen M; Requena, David; Egoavil, Miguel; Pineros-Leano, Maria F; Zunt, Joseph R; García, Patricia J
2014-01-01
Tuberculosis (TB) is a global health concern and a leading infectious cause of mortality. Reversing TB incidence and disease-related mortality is a major global health priority. Infectious disease mortality is directly linked to failure to adhere to treatment. Using technology to send reminders by short message service (SMS) has been shown to improve treatment adherence. However, few studies have examined tuberculosis patients' perceptions of and attitudes towards using SMS technology to increase treatment adherence. In this study, we sought to investigate perceptions related to the feasibility and acceptability of using text messaging to improve treatment adherence among adults who were receiving treatment for TB in Callao, Peru. We conducted qualitative focus group interviews with participants who were TB-positive but non-contagious to understand the attitudes, perceptions, and feasibility of using SMS reminders to improve TB treatment adherence. Subjects receiving care through the National TB Program were recruited through public health centers in Ventanilla, Callao, Peru. In four focus groups, we interviewed 16 patients. All interviews were recorded and transcribed verbatim. Thematic network analysis and codebook techniques were used to analyze the data. Three major themes emerged from the data: limits on health literacy and information posed challenges to successful TB treatment adherence; treatment motivation at times facilitated adherence to TB treatment; and acceptability of SMS, including positive perceptions of SMS to improve TB treatment adherence. The majority of patients shared considerations about how to effectively and confidentially administer an SMS intervention with TB-positive participants.
The overall perceptions of the use of SMS were positive and indicated that SMS technology may be an efficient way to transmit motivational texts on treatment, health education information, and simple reminders to increase treatment adherence for low-income TB patients living in Peru.
#Proana: Pro-Eating Disorder Socialization on Twitter.
Arseniev-Koehler, Alina; Lee, Hedwig; McCormick, Tyler; Moreno, Megan A
2016-06-01
Pro-eating disorder (ED) online movements support engagement with ED lifestyles and are associated with negative health consequences for adolescents with EDs. Twitter is a popular social media site among adolescents that provides a unique setting for Pro-ED content to be publicly exchanged. The purpose of this study was to investigate Pro-ED Twitter profiles' references to EDs and how their social connections (followers) reference EDs. A purposeful sample of 45 Pro-ED profiles was selected from Twitter. Profile information, all tweets, and a random sample of 100 of their followers' profile information were collected for content analysis using the Twitter Application Programming Interface. A codebook based on ED screening guidelines was applied to evaluate ED references. For each Pro-ED profile, proportion of tweets with ED references and proportion of followers with ED references in their own profile were evaluated. In total, our 45 Pro-ED profiles generated 4,245 tweets for analysis. A median of 36.4% of profiles' tweets contained ED references. Pro-ED profiles had a median of 173 followers, and a median of 44.5% of followers had ED references. Pro-ED profiles with more tweets with ED references also tended to have more followers with ED references (β = .37, p < .01). Findings suggest that profiles which self-identify as Pro-ED express disordered eating patterns through tweets and have an audience of followers, many of whom also reference ED in their own profiles. ED socialization on Twitter might provide social support, but in the Pro-ED context this activity might also reinforce an ED identity. Copyright © 2016 The Society for Adolescent Health and Medicine. All rights reserved.
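The abstract above reports a standardized coefficient (β = .37) relating the proportion of a profile's tweets with ED references to the proportion of its followers with ED references. The study's exact model is not specified, but in a simple bivariate regression the standardized coefficient equals the Pearson correlation, which can be sketched as follows with hypothetical proportions.

```python
def standardized_beta(x, y):
    """Standardized slope of a simple bivariate regression (equals Pearson r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = (sum((v - mx) ** 2 for v in x) / n) ** 0.5   # population SD of x
    sy = (sum((v - my) ** 2 for v in y) / n) ** 0.5   # population SD of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return cov / (sx * sy)

# hypothetical per-profile proportions: tweets with ED refs vs. followers with ED refs
tweet_props = [0.1, 0.3, 0.4, 0.6]
follower_props = [0.2, 0.35, 0.5, 0.55]
beta = standardized_beta(tweet_props, follower_props)
```

With standardized variables, the slope and the correlation coincide, so a β of .37 can be read directly as a moderate positive association.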
Cultural implications of mentoring in sub-Saharan Africa: a qualitative study.
Sawatsky, Adam P; Parekh, Natasha; Muula, Adamson S; Mbata, Ihunanya; Bui, Thuy
2016-06-01
Although many studies have demonstrated the benefits of mentoring in academic medicine, conceptual understanding has been limited to studies performed in North America and Europe. An ecological model of mentoring in academic medicine can provide structure for a broader understanding of the role of culture in mentoring. The goal of this study was to explore the role of culture in the development and maintenance of mentoring relationships within the context of the University of Malawi College of Medicine. A qualitative study using in-depth, semi-structured interviews and thematic analysis was conducted to explore the meaning of mentorship at the study institution. Criterion sampling was used to identify and recruit medical students, interns, registrars and faculty members. Study team members developed a codebook through open coding and applied it to all interview transcripts. Thematic analysis was used to identify and categorise themes according to an ecological model. A total of 46 participants from two major centres in Malawi were interviewed. Themes were identified within three domains: the intrapersonal, the interpersonal, and the institutional. Intrapersonal themes included Malawian politeness, mentoring needs, and friendliness and willingness to help. Interpersonal themes included understanding the role of the mentor, respect for elders, personal and professional boundaries, and perceptions of others. Institutional themes included the supervisor versus the mentor, time pressures, tension about the scope of training, and the mentoring cycle. This study highlights the strengths of, and the challenges imposed by, culture in the provision of mentoring relationships at the study institution. It also highlights the central role of culture in mentoring and proposes an updated model for mentoring in academic medicine.
This model can inform future research on mentoring and may serve as a model in the larger effort to provide faculty development in mentoring across sub-Saharan Africa. © 2016 John Wiley & Sons Ltd.
Media portrayal of prenatal and postpartum marijuana use in an era of scientific uncertainty.
Jarlenski, Marian; Koma, Jonathan W; Zank, Jennifer; Bodnar, Lisa M; Tarr, Jill A; Chang, Judy C
2018-06-01
Objectives were to characterize how scientific information about prenatal and postpartum marijuana use was presented in online media content, and to assess how media portrayed the risks and benefits of such marijuana use. We analyzed online media items (n = 316) from March 2015 to January 2017. A codebook was developed to measure media content in 4 domains: scientific studies, information about health and well-being, mode of ingestion, and portrayal of risks and benefits. Content analysis was performed by two authors, with high inter-rater reliability (mean κ = 0.82). Descriptive statistics were used to characterize content, and regression analyses were used to test for predictors of media portrayal of the risk-benefit ratio of prenatal and postpartum marijuana use. Overall, 51% of the media items mentioned health risks of prenatal and postpartum marijuana use. Nearly one-third (28%) mentioned marijuana use for treatment of nausea and vomiting in pregnancy. Most media items mentioned a specific research study. More than half of the media items (59%) portrayed prenatal or postpartum marijuana risks > benefits, 10% portrayed benefits > risks, and the remainder were neutral. While mention of a scientific study was not predictive of the portrayal of the risk-benefit ratio of marijuana use in pregnancy or postpartum, discussion of health risks and health benefits did predict portrayals of the risk-benefit ratio. Online media content about prenatal and postpartum marijuana use presented health risks consistent with evidence, and discussed a health benefit of marijuana use for nausea and vomiting in pregnancy. Portrayal of risks and benefits was somewhat equivocal, consistent with current scientific debate. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
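The weight-versus-reliability tradeoff described above can be illustrated, in a simplified form that is not the authors' method, with the textbook stress-strength interference model for independent normal load L and resistance R: the failure probability is Pf = Φ(−β) with reliability index β = (μR − μL)/√(σR² + σL²). Increasing resistance (at the cost of weight) drives Pf toward zero, while a marginal design approaches Pf = 0.5.

```python
from math import erf, sqrt

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def failure_probability(mu_r, sigma_r, mu_l, sigma_l):
    """Pf for independent normal resistance R and load L (stress-strength model)."""
    beta = (mu_r - mu_l) / sqrt(sigma_r ** 2 + sigma_l ** 2)
    return norm_cdf(-beta)
```

Sweeping the mean resistance from below the mean load to well above it traces the same inverted-S shape the abstract describes: Pf near 1 for failure-prone (light) designs, 0.5 at the mean-valued design, and near 0 for heavy designs.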
A flexible layout design method for passive micromixers.
Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui
2012-10-01
This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons performed between the obtained layouts and optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer.
Aircraft digital control design methods
NASA Technical Reports Server (NTRS)
Powell, J. D.; Parsons, E.; Tashker, M. G.
1976-01-01
Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s-plane design method, which was acceptable only above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
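One way to see why s-plane design fidelity degrades at slow sample rates, using a single first-order pole rather than the aircraft models in the study, is to compare the exact mapping of a continuous pole s = −a into the z plane (z = e^{−aT}) with the pole produced by the Tustin (bilinear) approximation:

```python
from math import exp

def exact_pole(a, T):
    # exact z-plane image of the continuous pole s = -a at sample period T
    return exp(-a * T)

def tustin_pole(a, T):
    # bilinear transform s = (2/T)(z-1)/(z+1) applied to the pole s = -a
    return (1 - a * T / 2) / (1 + a * T / 2)

# fast sampling: the two poles nearly coincide; slow sampling: they diverge
fast_error = abs(exact_pole(1.0, 0.01) - tustin_pole(1.0, 0.01))
slow_error = abs(exact_pole(1.0, 1.0) - tustin_pole(1.0, 1.0))
```

At 100 samples per time constant the pole error is negligible, while at 1 sample per time constant the approximated closed-loop root has moved appreciably, mirroring the sample-rate thresholds reported in the abstract.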
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-12
... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...
Bortz, John; Shatz, Narkis
2011-04-01
The recently developed generalized functional method provides a means of designing nonimaging concentrators and luminaires for use with extended sources and receivers. We explore the mathematical relationships between optical designs produced using the generalized functional method and edge-ray, aplanatic, and simultaneous multiple surface (SMS) designs. Edge-ray and dual-surface aplanatic designs are shown to be special cases of generalized functional designs. In addition, it is shown that dual-surface SMS designs are closely related to generalized functional designs and that certain computational advantages accrue when the two design methods are combined. A number of examples are provided. © 2011 Optical Society of America
The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter
NASA Technical Reports Server (NTRS)
Townsend, Barbara K.
1987-01-01
A control-system design method, quadratic optimal cooperative control synthesis (CCS), is applied to the design of a stability and control augmentation system (SCAS). The CCS design method differs from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot model to create the desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and linear quadratic regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.
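The LQR comparison mentioned above can be illustrated with the simplest possible case, a scalar plant ẋ = ax + bu with cost ∫(qx² + ru²) dt, for which the algebraic Riccati equation 2aP − (bP)²/r + q = 0 has a closed-form stabilizing solution. This is a textbook sketch, not the CH-47 design in the paper.

```python
from math import sqrt

def scalar_lqr(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation.

    Returns the optimal gain K (u = -K x) and the closed-loop pole a - b*K.
    """
    # stabilizing root of 2*a*P - (b*P)**2/r + q = 0
    P = r * (a + sqrt(a * a + b * b * q / r)) / (b * b)
    K = b * P / r
    return K, a - b * K

# unstable plant a = 1 stabilized with unit weights
K, a_cl = scalar_lqr(1.0, 1.0, 1.0, 1.0)
```

The closed-loop pole works out to −√(a² + b²q/r), so it is always stable, and increasing the state weight q pushes it further left at the cost of a larger gain.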
The application of mixed methods designs to trauma research.
Creswell, John W; Zhang, Wanqing
2009-12-01
Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.
Educating Instructional Designers: Different Methods for Different Outcomes.
ERIC Educational Resources Information Center
Rowland, Gordon; And Others
1994-01-01
Suggests new methods of teaching instructional design based on literature reviews of other design fields including engineering, architecture, interior design, media design, and medicine. Methods discussed include public presentations, visiting experts, competitions, artifacts, case studies, design studios, and internships and apprenticeships.…
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify (a) current HCI design methods and (b) approaches for selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised into 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Game Methodology for Design Methods and Tools Selection
ERIC Educational Resources Information Center
Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois
2014-01-01
Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with a boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbations of the design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
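A common sanity check for any design sensitivity analysis, shown here as an illustration rather than the ANSYS-based implementation the abstract describes, is to compare an analytic sensitivity against a central finite difference. For an axial bar with tip displacement u = PL/(EA), the sensitivity with respect to the cross-sectional area A is du/dA = −PL/(EA²):

```python
def tip_displacement(P, L, E, A):
    # axial bar under end load P: u = P*L/(E*A)
    return P * L / (E * A)

def analytic_sensitivity(P, L, E, A):
    # d(u)/dA = -P*L/(E*A**2)
    return -P * L / (E * A * A)

def fd_sensitivity(P, L, E, A, h=1e-6):
    # second-order central finite difference in the design variable A
    return (tip_displacement(P, L, E, A + h)
            - tip_displacement(P, L, E, A - h)) / (2 * h)
```

Agreement of the two values to several digits gives confidence in the analytic derivation before it is embedded in an optimization loop, which is how design-sensitivity implementations are typically verified.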
Experimental design methods for bioengineering applications.
Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri
2016-01-01
Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
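The factorial and composite designs listed above are straightforward to generate programmatically. As a minimal sketch in coded units (run counts and level choices are illustrative, not tied to any study in this review), a full factorial enumerates every level combination, while a central composite design augments a two-level factorial with axial and center points:

```python
from itertools import product

def full_factorial(levels):
    """Every combination of factor levels, e.g. levels=[2, 3] gives 2*3 = 6 runs."""
    return [list(run) for run in product(*(range(n) for n in levels))]

def central_composite(k, alpha=1.414, n_center=3):
    """Central composite design in coded units: 2^k factorial corners,
    2k axial (star) points at +/- alpha, plus replicated center points."""
    corners = [list(run) for run in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            point = [0.0] * k
            point[i] = sign
            axial.append(point)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

# two factors: 4 corners + 4 axial + 3 center points = 11 runs
ccd = central_composite(2)
```

Libraries such as pyDOE2 additionally provide Plackett-Burman and Box-Behnken constructions, which require dedicated generators.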
An overview of very high level software design methods
NASA Technical Reports Server (NTRS)
Asdjodi, Maryam; Hooper, James W.
1988-01-01
Very High Level design methods emphasize the automatic transfer of requirements to formal design specifications, and/or may concentrate on the automatic transformation of formal design specifications, including some semantic information about the system, into machine-executable form. Very High Level design methods range from general domain-independent methods to approaches implementable for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Although a given approach does not always fall exactly into any specific class, this paper provides a classification for Very High Level design methods, including examples for each class. These methods are analyzed and compared on the basis of their underlying approaches, their strengths, and their feasibility for future expansion toward the automatic development of software systems.
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional design (34%), prospective design (21%), and experimental design (25%). Of all the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked some needed statistical analysis, and 23 (12%) had inconsistencies in their description of methods. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measures data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology. Better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is required to improve the quality of the literature.
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with few variables. This paper summarizes the parametric methods in common use and briefly introduces their principles. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method, and the Singular Value Decomposition method, all of which are widely applied in airfoil design. The survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited; the most popular is the Free-form deformation method. Methods extended from two-dimensional parametric approaches have promising prospects for aircraft modeling. Since the parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
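The Class/Shape function transformation (CST) mentioned above is compact enough to sketch. Assuming the standard Kulfan form with class exponents N1 = 0.5 and N2 = 1.0 and a Bernstein-polynomial shape function (the coefficient values below are invented for illustration, not taken from the survey):

```python
from math import comb

def cst_surface(x, coeffs, n1=0.5, n2=1.0):
    """Kulfan CST: class function x^n1 * (1-x)^n2 multiplied by a
    Bernstein-weighted shape function built from the design variables."""
    n = len(coeffs) - 1
    shape = sum(a * comb(n, i) * x**i * (1 - x)**(n - i)
                for i, a in enumerate(coeffs))
    return x**n1 * (1 - x)**n2 * shape

# upper-surface ordinates of a hypothetical airfoil from four design variables
coeffs = [0.17, 0.15, 0.14, 0.12]
xs = [i / 50 for i in range(51)]
ys = [cst_surface(x, coeffs) for x in xs]
```

The class function enforces a round leading edge and a sharp trailing edge by construction, which is why a handful of shape coefficients spans a useful airfoil design space.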
The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter
NASA Technical Reports Server (NTRS)
Townsend, Barbara K.
1986-01-01
A control-system design method, Quadratic Optimal Cooperative Control Synthesis (CCS), is applied to the design of a Stability and Control Augmentation System (SCAS). The CCS design method differs from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot model to create the desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing Vertol CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and Linear Quadratic Regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs that compare favorably with the frequency-domain approach.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
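A minimal stand-in for this idea: train a cheap surrogate on a handful of "simulation" results, then search the surrogate instead of the expensive code. The sketch below substitutes a random-feature network fit by least squares (an extreme-learning-machine-style shortcut, not the paper's composite response surfaces), and the objective is a made-up 1-D function standing in for a turbine-airfoil cost:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Hypothetical expensive objective (stand-in for a CFD evaluation)."""
    return (x - 0.6) ** 2 + 0.05 * np.sin(9 * x)

# training data from a small number of "simulations"
X = np.linspace(0.0, 1.0, 25)
y = objective(X)

# random tanh features; output weights solved in closed form by least squares
H = 20
W = rng.uniform(-8.0, 8.0, H)
b = rng.uniform(-4.0, 4.0, H)
Phi = np.hstack([np.tanh(np.outer(X, W) + b), np.ones((len(X), 1))])
w_out = np.linalg.lstsq(Phi, y, rcond=None)[0]

# search the cheap surrogate on a dense grid instead of running more simulations
grid = np.linspace(0.0, 1.0, 2001)
Phi_g = np.hstack([np.tanh(np.outer(grid, W) + b), np.ones((len(grid), 1))])
x_best = grid[np.argmin(Phi_g @ w_out)]
```

The payoff is the same as in the paper: once the surrogate is trained, each candidate evaluation costs a matrix multiply rather than a simulation run.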
Applications of mixed-methods methodology in clinical pharmacy research.
Hadi, Muhammad Abdul; Closs, S José
2016-06-01
Introduction: Mixed-methods methodology, as the name suggests, refers to the mixing of elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used in healthcare research are the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own advantages, challenges, and procedures, and the selection of a particular design should be guided by the research question. Guidance on designing, conducting, and reporting mixed-methods research is available in the literature, and it is advisable to adhere to it to ensure methodological rigour. When to use: Mixed methods are best suited when the research questions require triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another; developing a scale or questionnaire; or answering different research questions within a single study. Two case studies are presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation, and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time-consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.
NASA Astrophysics Data System (ADS)
Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong
2018-05-01
This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding design points, which are selected by the Shifted Hammersley Method (SHM) and evaluated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the error tolerance. Based on the response surface of the preliminary design, a refined optimal design is then carried out using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated on a classic three-phase power TDO problem.
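The dynamic-updating loop can be sketched in a few lines. Below, a van der Corput sequence stands in for the Shifted Hammersley points, an analytic function stands in for the finite-element model, and the loop stops when the quadratic response surface stops changing by more than an error tolerance; all names and values are illustrative, not the paper's implementation:

```python
import numpy as np

def fem(x):
    """Stand-in for an expensive finite-element evaluation (hypothetical)."""
    return (x - 0.6) ** 2 + 0.1 * x ** 3

def van_der_corput(i, base=2):
    """1-D low-discrepancy point (stand-in for Shifted Hammersley sampling)."""
    q, denom = 0.0, 1.0
    while i:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

X = [0.0, 0.3, 1.0]                        # preliminary design points
y = [fem(x) for x in X]
coef = np.polyfit(X, y, 2)                 # second-order polynomial regression
for i in range(1, 30):                     # dynamic updating with new points
    X.append(van_der_corput(i))
    y.append(fem(X[-1]))
    new_coef = np.polyfit(X, y, 2)
    if np.max(np.abs(new_coef - coef)) < 1e-3:   # tolerance sets the design level
        coef = new_coef
        break
    coef = new_coef

x_opt = -coef[1] / (2.0 * coef[0])         # minimizer of the response surface
```

The refined MOGA stage of the paper would then search around `x_opt` on the converged surface rather than against the expensive model directly.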
Reusable design: A proposed approach to Public Health Informatics system design
2011-01-01
Background Since it was first defined in 1995, Public Health Informatics (PHI) has become a recognized discipline, with a research agenda, defined domain-specific competencies and a specialized corpus of technical knowledge. Information systems form a cornerstone of PHI research and implementation, representing significant progress for the nascent field. However, PHI does not advocate or incorporate standard, domain-appropriate design methods for implementing public health information systems. Reusable design is generalized design advice that can be reused in a range of similar contexts. We propose that PHI create and reuse information design knowledge by taking a systems approach that incorporates design methods from the disciplines of Human-Computer Interaction, Interaction Design and other related disciplines. Discussion Although PHI operates in a domain with unique characteristics, many design problems in public health correspond to classic design problems, suggesting that existing design methods and solution approaches are applicable to the design of public health information systems. Among the numerous methodological frameworks used in other disciplines, we identify scenario-based design and participatory design as two widely-employed methodologies that are appropriate for adoption as PHI standards. We make the case that these methods show promise to create reusable design knowledge in PHI. Summary We propose the formalization of a set of standard design methods within PHI that can be used to pursue a strategy of design knowledge creation and reuse for cost-effective, interoperable public health information systems. We suggest that all public health informaticians should be able to use these design methods and the methods should be incorporated into PHI training. PMID:21333000
A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits
NASA Astrophysics Data System (ADS)
Moradi, Behzad; Mirzaei, Abdolreza
2016-11-01
A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property of this automated design method is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated into the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that distinguishes between high- and low-fitness areas of the design space. The learning process can detect the right directions of evolution and leads to large steps in the evolution of the individuals. The learning phase shortens the evolution process and yields a remarkable reduction in the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. To improve design accuracy, the bsim3v3 CMOS transistor model is adopted in the proposed design method. The proposed method is tested on three different operational amplifier circuits, and its performance is verified by comparison with the evolution strategy algorithm and other similar methods.
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
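The fully stressed recipe, and the displacement patch that motivates MFUD, can be seen in a toy example. Two parallel bars share a load in proportion to their areas (so the problem is statically indeterminate); stress-ratio resizing drives every bar to the allowable stress, and a uniform scale-up then restores a violated displacement limit. All numbers are invented for illustration, and this is a sketch of the idea, not NASA's MFUD algorithm:

```python
P = 100.0            # applied load
SIGMA_ALLOW = 25.0   # allowable stress
K_UNIT = 1000.0      # stiffness per unit area (E*A/L with A = 1)
U_LIMIT = 0.002      # displacement limit

def member_forces(areas):
    """Two parallel bars: load shared in proportion to stiffness (k_i ~ A_i)."""
    total = sum(areas)
    return [P * a / total for a in areas]

areas = [0.5, 2.0]
for _ in range(10):  # fully stressed design: resize by the stress ratio
    areas = [abs(f) / SIGMA_ALLOW for f in member_forces(areas)]

stresses = [f / a for f, a in zip(member_forces(areas), areas)]

# displacement check in the spirit of MFUD: scale the design up if needed
u = P / (K_UNIT * sum(areas))
if u > U_LIMIT:
    scale = u / U_LIMIT
    areas = [a * scale for a in areas]
```

The uniform scale-up is exactly the source of the overdesign the abstract describes: it satisfies the displacement limit but abandons the stress-optimal proportions, which is what MFUD's modification addresses.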
Enhanced learning through design problems - teaching a components-based course through design
NASA Astrophysics Data System (ADS)
Jensen, Bogi Bech; Högberg, Stig; Fløtum Jensen, Frida av; Mijatovic, Nenad
2012-08-01
This paper describes a teaching method used in an electrical machines course, in which the students learn about electrical machines by designing them. The aim of the course is not to teach design, although design skill is a side product, but rather to teach the fundamentals and function of electrical machines through design. The teaching method is evaluated by a student questionnaire designed to measure the quality and effectiveness of the teaching method. The results of the questionnaire conclusively show that this method, labelled 'learning through design', is a very effective way of teaching a components-based course. The teaching method can easily be generalised and used in other courses.
Tradeoff studies in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1983-01-01
A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
Methodical Design of Software Architecture Using an Architecture Design Assistant (ArchE)
2005-04-01
Felix Bachmann and Mark Klein, Software Engineering Institute, Pittsburgh, PA 15213-3890. Report documentation, April 2005. The report argues that quality requirements and constraints are the most important inputs to architecture design.
Project Lifespan-based Nonstationary Hydrologic Design Methods for Changing Environment
NASA Astrophysics Data System (ADS)
Xiong, L.
2017-12-01
Under a changing environment, design floods must be associated with the design life period of projects to ensure the hydrologic design is really relevant to the operation of the hydrologic projects, because the design value for a given exceedance probability over the project life period can differ significantly from that over other time periods of the same length, due to the nonstationarity of probability distributions. Several hydrologic design methods that take the design life period of projects into account have been proposed in recent years: the expected number of exceedances (ENE), design life level (DLL), equivalent reliability (ER), and average design life level (ADLL). Among the four methods compared, the ENE and ER methods are return period-based, while DLL and ADLL are risk/reliability-based methods that estimate design values for given probability values of risk or reliability. However, the four methods can be unified under a general framework through a relationship transforming the so-called representative reliability (RRE) into the return period, i.e. m = 1/(1 - RRE), in which the return period m is computed from the representative reliability RRE. The nonstationary design quantiles and associated confidence intervals calculated by ENE, ER, and ADLL were very similar, since ENE and ER are special cases of, or have expression forms similar to, ADLL. In particular, the design quantiles calculated by ENE and ADLL were identical when the return period was equal to the length of the design life. In addition, DLL can yield similar design values if the relationship between DLL and ER/ADLL return periods is considered. Furthermore, ENE, ER, and ADLL adapted well to either an increasing or a decreasing situation, yielding design quantiles that were neither too large nor too small.
This is important for applications of nonstationary hydrologic design methods in actual practice because of the concern of choosing the emerging nonstationary methods versus the traditional stationary methods. There is still a long way to go for the conceptual transition from stationarity to nonstationarity in hydrologic design.
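The ENE variant above is easy to sketch: under nonstationarity the annual exceedance probability p_t(z) changes with time, and the design value z is chosen so that the expected number of exceedances over the design life equals life/m, with m = 1/(1 - RRE) linking the reliability form to a return period. The Gumbel trend parameters below are invented for illustration, not taken from the paper:

```python
import math

def p_exceed(z, t, mu0=100.0, trend=0.5, beta=20.0):
    """Annual exceedance probability from a Gumbel distribution whose
    location drifts linearly in time (illustrative nonstationary model)."""
    mu = mu0 + trend * t
    return 1.0 - math.exp(-math.exp(-(z - mu) / beta))

def ene_design(return_period, life):
    """Expected Number of Exceedances: bisect for the design value z with
    sum_t p_t(z) = life / return_period over the design life."""
    target = life / return_period
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sum(p_exceed(mid, t) for t in range(1, life + 1)) > target:
            lo = mid          # too many expected exceedances: raise the quantile
        else:
            hi = mid
    return 0.5 * (lo + hi)

z100 = ene_design(100, 50)    # 100-year design value over a 50-year project life
```

With an upward trend in the Gumbel location, the nonstationary design value lands well above the stationary 100-year quantile computed at the initial location parameter, which is the practical point of these methods.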
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
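For the simplest separable case, the OC update can be written down directly. Minimizing the weight sum(l_i * x_i) subject to one active displacement-type constraint sum(c_i / x_i) <= d, the optimality criterion gives x_i proportional to sqrt(c_i / l_i); the classic resize-then-rescale iteration recovers this. A textbook sketch, not the NASA Lewis OC code:

```python
def oc_design(l, c, d, n_iter=30, eta=0.5):
    """Optimality-criteria resizing for:  minimize sum(l_i * x_i)
    subject to sum(c_i / x_i) <= d.  Each step applies the OC update
    x_i <- x_i * (c_i / (l_i * x_i^2))**eta, then rescales the whole
    design so the displacement constraint is exactly active."""
    x = [1.0] * len(l)
    for _ in range(n_iter):
        x = [xi * (ci / (li * xi * xi)) ** eta
             for xi, li, ci in zip(x, l, c)]
        scale = sum(ci / xi for ci, xi in zip(c, x)) / d  # activate constraint
        x = [xi * scale for xi in x]
    return x

x = oc_design(l=[1.0, 2.0], c=[4.0, 1.0], d=1.0)
```

For this single-constraint problem the analytic optimum weight is (sum_i sqrt(c_i * l_i))^2 / d, which the iteration reaches; the multiple-constraint behavior discussed in the abstract (tracking a subset of active constraints) is exactly what this simple scaling step cannot capture.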
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design.
Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...
ERIC Educational Resources Information Center
Honebein, Peter C.
2017-01-01
An instructional designer's values about instructional methods can be a curse or a cure. On one hand, a designer's love affair for a method may cause them to use that method in situations that are not appropriate. On the other hand, that same love affair may inspire a designer to fight for a method when those in power are willing to settle for a…
ERIC Educational Resources Information Center
Sinharay, Sandip; Holland, Paul W.
2008-01-01
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three popular equating methods that can be used with a NEAT design are the poststratification equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. These three methods each…
Study of Fuze Structure and Reliability Design Based on the Direct Search Method
NASA Astrophysics Data System (ADS)
Lin, Zhang; Ning, Wang
2017-03-01
Redundant design is one of the important methods for improving the reliability of a system, but the design often involves the mutual coupling of multiple factors. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which reliability, cost, structural weight, and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of an aircraft-critical system are computed. The results show that this method is convenient and workable, and that with appropriate modifications it is applicable to the redundancy configuration and optimization of various designs. The method therefore has good practical value.
Demystifying Mixed Methods Research Design: A Review of the Literature
ERIC Educational Resources Information Center
Caruth, Gail D.
2013-01-01
Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…
Yamada, Akira; Terakawa, Mitsuhiro
2015-04-10
We present a design method for a bull's eye structure with asymmetric grooves for focusing obliquely incident light. The method can place transmission peaks at a desired oblique angle while collecting light from a wider range of angles. The bull's eye groove geometry for oblique incidence is designed from the electric field intensity pattern around an isolated subwavelength aperture on a thin gold film at oblique incidence, calculated by the finite-difference time-domain method. Wide angular transmission efficiency is achieved by overlapping two bull's eye groove patterns designed for different peak angles. Our design method can overcome the angular limitations of conventional methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey is presented of applications of mathematical programming methods to improve the design of helicopters and their components. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are covered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Applications of numerical optimization methods to helicopter design problems - A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1985-01-01
A survey is presented of applications of mathematical programming methods to improve the design of helicopters and their components. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are covered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Radiofrequency pulse design using nonlinear gradient magnetic fields.
Kopanoglu, Emre; Constable, R Todd
2015-09-01
An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
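The greedy atom-selection step at the heart of the matching pursuit algorithm used above can be sketched as follows. This is a minimal illustration of matching pursuit in general, not the authors' SEF-selection code; the dictionary, signal, and function names are assumptions.

```python
import numpy as np

def matching_pursuit(D, x, n_atoms):
    """Greedily select the dictionary atoms (columns of D, assumed unit-norm)
    that best explain the target x, subtracting each atom's contribution."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual              # correlation of every atom with the residual
        j = int(np.argmax(np.abs(corr)))   # best-matching atom
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]      # remove its contribution
    return coeffs, residual

# With an orthonormal dictionary, a sparse target is recovered exactly.
D = np.eye(4)
x = np.array([0.0, 2.0, 0.0, -1.0])
c, r = matching_pursuit(D, x, 2)
```

In a pulse-design setting, such a selection loop would be paired with a conjugate-gradient solve for the RF waveform, as the abstract describes.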
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
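For designs in which clustering can be ignored, the standard binomial decision rule mentioned above reduces to a tail-probability computation. The sketch below uses an illustrative rule (sample n = 19 units, accept the lot if at most d = 3 "failures" are observed); these numbers are assumptions, not taken from the paper.

```python
from math import comb

def accept_prob(n, d, p):
    """P(accepting the lot) = P(at most d failures among n sampled units)
    when each unit independently fails with probability p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(d + 1))

# Operating characteristics of the illustrative rule.
low_risk = accept_prob(19, 3, 0.20)   # lot with a 20% failure rate
high_risk = accept_prob(19, 3, 0.50)  # lot with a 50% failure rate
```

Comparing acceptance probabilities at a "good" and a "bad" prevalence is how LQAS rules are typically calibrated: the rule should accept good lots often and bad lots rarely.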
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.
Hund, Lauren; Bedrick, Edward J; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
POLLUTION PREVENTION IN THE EARLY STAGES OF HIERARCHICAL PROCESS DESIGN
Hierarchical methods are often used in the conceptual stages of process design to synthesize and evaluate process alternatives. In this work, the methods of hierarchical process design will be focused on environmental aspects. In particular, the design methods will be coupled to ...
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best exploit the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable with the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
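The adjoint idea the abstract relies on can be illustrated on a tiny discrete system. For compliance C = fᵀu with K(p)u = f and symmetric K, the adjoint solution coincides with u, so dC/dpᵢ = −uᵀ(∂K/∂pᵢ)u with no extra solves per parameter. The two-spring system and all names below are illustrative assumptions, not the paper's continuum formulation.

```python
import numpy as np

def stiffness(p):
    """Stiffness matrix of two springs in series with stiffnesses p1, p2."""
    p1, p2 = p
    return np.array([[p1 + p2, -p2],
                     [-p2,      p2]])

def adjoint_sensitivity(p, f):
    """dC/dp for compliance C = f^T u, K(p) u = f, via the adjoint identity
    dC/dp_i = -u^T (dK/dp_i) u (valid because K is symmetric)."""
    u = np.linalg.solve(stiffness(p), f)
    dK1 = np.array([[1.0, 0.0], [0.0, 0.0]])    # dK/dp1
    dK2 = np.array([[1.0, -1.0], [-1.0, 1.0]])  # dK/dp2
    return np.array([-u @ dK1 @ u, -u @ dK2 @ u])

g = adjoint_sensitivity(np.array([2.0, 3.0]), np.array([0.0, 1.0]))
```

For this series system the compliance under a unit tip load is 1/p1 + 1/p2, so the exact gradient is (−1/p1², −1/p2²), which the adjoint computation reproduces.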
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Kondo, Keiichiro
A hybrid railway traction system with fuel cells (FCs) and electric double-layer capacitors (EDLCs) is discussed in this paper. This system can reduce FC costs and absorb regenerative energy. A method for sizing the FCs and EDLCs on the basis of output power and capacitance, respectively, has not been reported, even though such sizing is one of the most important technical issues in the design of hybrid railway vehicles. Such a design method is presented here, along with a train load profile and an energy management strategy. The design results obtained using the proposed method are verified by performing numerical simulations of a running train. These results reveal that the proposed method for designing the EDLCs and FCs on the basis of capacitance and power, respectively, together with a method for controlling the EDLC voltage, is effective in designing efficient EDLCs and FCs for hybrid railway traction systems.
RF Pulse Design using Nonlinear Gradient Magnetic Fields
Kopanoglu, Emre; Constable, R. Todd
2014-01-01
Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (matching pursuit and conjugate gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286
Innovative design method of automobile profile based on Fourier descriptor
NASA Astrophysics Data System (ADS)
Gao, Shuyong; Fu, Chaoxing; Xia, Fan; Shen, Wei
2017-10-01
Aiming at innovative design of automobile side contours, this paper presents a design method for the vehicle side profile based on Fourier descriptors. The design flow of the method is: pre-processing, coordinate extraction, standardization, discrete Fourier transform, simplification of the Fourier descriptors, descriptor exchange, and inverse Fourier transform to obtain the innovative outline. Innovative concepts of descriptor ("gene") exchange within a species and between different species are presented, and the corresponding design contours are obtained separately. A three-dimensional car model is then built by referring to the profile curve obtained by exchanging genes between species. The feasibility of the proposed method is verified from several aspects.
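The transform-exchange-inverse pipeline the abstract describes can be sketched for closed 2-D contours represented as complex points. The two parent shapes, the swapped harmonic, and all names below are illustrative assumptions standing in for the paper's vehicle profile curves.

```python
import numpy as np

def exchange_descriptors(za, zb, band):
    """Fourier-descriptor 'gene exchange': replace the descriptors of contour A
    indexed by `band` with those of contour B, then inverse-transform."""
    Za, Zb = np.fft.fft(za), np.fft.fft(zb)
    Za[band] = Zb[band]
    return np.fft.ifft(Za)

# Illustrative parent contours (complex points z = x + 1j*y), 64 samples each.
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle = np.exp(1j * t)                      # unit circle
ellipse = 2.0 * np.cos(t) + 1j * np.sin(t)   # 2:1 ellipse
child = exchange_descriptors(circle, ellipse, band=[1])  # swap the dominant harmonic
```

Because the ellipse's first harmonic has amplitude 1.5, swapping only that descriptor into the circle yields a circle of radius 1.5; swapping a wider band blends more of the donor shape's character.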
NASA Astrophysics Data System (ADS)
Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin
2018-05-01
Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the fields of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators and rotational actuators, have been designed using experience-based design methods. This paper proposes a new method for the design of DE actuators using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.
Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Korte, John J.
2003-01-01
NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.
Bishop, Felicity L
2015-02-01
To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies, which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects, each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obviate the need to address the philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? 
Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.
Efficient data communication protocols for wireless networks
NASA Astrophysics Data System (ADS)
Zeydan, Engin
In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate the energy consumption reduction and throughput maximization problems separately, using multi-hop data aggregation for correlated data. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple-antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. 
By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure strategy Nash equilibrium with high probability throughout the iterations in the interference-impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
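Regret matching of the kind invoked above can be sketched on a toy two-player coordination game standing in for codebook selection: each "transmitter" gains only when its choice matches the other's. The game, parameters, and names below are illustrative assumptions, not the dissertation's beamformer formulation.

```python
import numpy as np

def regret_matching(payoff_a, payoff_b, iters=5000, seed=0):
    """Hart/Mas-Colell regret matching: each player plays actions with
    probability proportional to positive cumulative regret, and the
    empirical joint play approaches the set of correlated equilibria."""
    rng = np.random.default_rng(seed)
    nA, nB = payoff_a.shape
    regret_a, regret_b = np.zeros(nA), np.zeros(nB)
    joint = np.zeros((nA, nB))
    for _ in range(iters):
        pa = np.maximum(regret_a, 0.0)
        pa = pa / pa.sum() if pa.sum() > 0 else np.full(nA, 1.0 / nA)
        pb = np.maximum(regret_b, 0.0)
        pb = pb / pb.sum() if pb.sum() > 0 else np.full(nB, 1.0 / nB)
        a = rng.choice(nA, p=pa)
        b = rng.choice(nB, p=pb)
        joint[a, b] += 1.0
        # regret of each alternative action against the opponent's actual play
        regret_a += payoff_a[:, b] - payoff_a[a, b]
        regret_b += payoff_b[a, :] - payoff_b[a, b]
    return joint / iters

# Toy coordination game: payoff 1 only when both choices match.
match = np.eye(2)
dist = regret_matching(match, match)
```

In the coordination game the empirical play concentrates on the matching outcomes, mirroring how regret-matching beamformer selection settles into mutually compatible choices with low coordination overhead.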
Wavelet subband coding of computer simulation output using the A++ array class library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
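The wavelet transform/scalar quantization (WSQ) idea can be illustrated with a single-level Haar transform and uniform quantizers. The step sizes below are illustrative assumptions, not the almost-uniform quantizers of the actual scheme, and the Haar wavelet stands in for whatever filter bank a production coder would use.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation (averages)
    and detail (differences) subbands with orthonormal scaling."""
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, diff

def haar_idwt(avg, diff):
    """Inverse of haar_dwt: perfect reconstruction without quantization."""
    x = np.empty(avg.size * 2)
    x[0::2] = (avg + diff) / np.sqrt(2.0)
    x[1::2] = (avg - diff) / np.sqrt(2.0)
    return x

def quantize(c, step):
    """Uniform scalar quantizer with the given step size."""
    return np.round(c / step) * step

# Smooth test signal: spend bits unevenly, as subband coders do --
# fine step on the approximation subband, coarse step on the detail subband.
x = np.sin(np.linspace(0.0, np.pi, 64))
avg, diff = haar_dwt(x)
x_hat = haar_idwt(quantize(avg, 0.01), quantize(diff, 0.1))
```

For smooth data the detail coefficients are small, so coarse quantization there costs little fidelity, which is exactly the property that lets SQ compete with trained VQ codebooks on DWT subbands.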
Hazelton, Patrick T; Steward, Wayne T; Collins, Shane P; Gaffney, Stuart; Morin, Stephen F; Arnold, Emily A
2014-01-01
In preparation for full Affordable Care Act implementation, California has instituted two healthcare initiatives that provide comprehensive coverage for previously uninsured or underinsured individuals. For many people living with HIV, this has required transition either from the HIV-specific coverage of the Ryan White program to the more comprehensive coverage provided by the county-run Low-Income Health Programs or from Medicaid fee-for-service to Medicaid managed care. Patient advocates have expressed concern that these transitions may present implementation challenges that will need to be addressed if ambitious HIV prevention and treatment goals are to be achieved. 30 semi-structured, in-depth interviews were conducted between October, 2012, and February, 2013, with policymakers and providers in 10 urban, suburban, and rural California counties. Interview topics included: continuity of patient care, capacity to handle payer source transitions, and preparations for healthcare reform implementation. Study team members reviewed interview transcripts to produce emergent themes, develop a codebook, build inter-rater reliability, and conduct analyses. Respondents supported the goals of the ACA, but reported clinic and policy-level challenges to maintaining patient continuity of care during the payer source transitions. They also identified strategies for addressing these challenges. Areas of focus included: gaps in communication to reach patients and develop partnerships between providers and policymakers, perceived inadequacy in new provider networks for delivering quality HIV care, the potential for clinics to become financially insolvent due to lower reimbursement rates, and increased administrative burdens for clinic staff and patients. California's new healthcare initiatives represent ambitious attempts to expand and improve health coverage for low-income individuals. 
The state's challenges in maintaining quality care and treatment for people living with HIV experiencing these transitions demonstrate the importance of setting effective policies in anticipation of full ACA implementation in 2014.
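Inter-rater reliability of the kind built above is often quantified with Cohen's kappa, which corrects raw coder agreement for chance. Kappa is one common choice rather than necessarily the statistic these authors used, and the toy labels below are illustrative.

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement between two coders, corrected
    for the agreement expected by chance from each coder's label rates."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    labels = sorted(set(codes_a) | set(codes_b))
    p_obs = sum(x == y for x, y in zip(codes_a, codes_b)) / n
    p_chance = sum((codes_a.count(l) / n) * (codes_b.count(l) / n) for l in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)

k = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # 0.5 for this toy example
```

Values near 1 indicate a stable codebook; low values usually prompt another round of codebook revision and coder reconciliation.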
Klieger, Sarah B; Gutman, Abraham; Allen, Leslie; Pacula, Rosalie Liccardo; Ibrahim, Jennifer K; Burris, Scott
2017-12-01
(1) To describe open source legal data sets, created for research use, that capture the key provisions of US state medical marijuana laws. The data document how state lawmakers have regulated a medicine that remains, under federal law, a Schedule I illegal drug with no legitimate medical use. (2) To demonstrate the variability that exists across states in rules governing patient access, product safety and dispensary practice. Two legal researchers collected and coded state laws governing marijuana patients, product safety and dispensaries in effect on 1 February 2017, creating three empirical legal data sets. We used summary tables to identify the variation in specific statutory provisions specified in each state's medical marijuana law as it existed on 1 February 2017. We compared aspects of these laws to the traditional Federal approach to regulating medicine. Full data sets, codebooks and protocols are available through the Prescription Drug Abuse Policy System (http://www.pdaps.org/; Archived at http://www.webcitation.org/6qv5CZNaZ on 2 June 2017). Twenty-eight states (including the District of Columbia) have authorized medical marijuana. Twenty-seven specify qualifying diseases, which differ across states. All states protect patient privacy; only 14 protect patients against discrimination. Eighteen states have mandatory product safety testing before any sale. While the majority have package/label regulations, states have a wide range of specific requirements. Most regulate dispensaries (25 states), with considerable variation in specific provisions such as permitted product supply sources, the number of dispensaries per state, and restrictions on proximity to various types of location. The federal ban in the United States on marijuana has resulted in a patchwork of regulatory strategies that are not uniformly consistent with the approach usually taken by the Federal government and whose effectiveness remains unknown. © 2017 Society for the Study of Addiction.
Fernandez, Melissa Anne; Desroches, Sophie; Turcotte, Mylène; Marquis, Marie; Dufour, Joëlle; Provencher, Véronique
2016-08-30
The Eat Well Campaign (EWC) was a social marketing campaign developed by Health Canada and disseminated to the public with the help of cross-sector partners. The purpose of this study was to describe factors that influenced cross-sector partners' decision to adopt the EWC. Thematic content analysis, based primarily on an a priori codebook of constructs from Rogers' diffusion of innovations decision process model, was conducted on hour-long semi-structured telephone interviews with Health Canada's cross-sector partners (n = 18). Dominant themes influencing cross-sector partners' decision to adopt the EWC were: high compatibility with the organization's values; being associated with Health Canada; and low perceived complexity of activities. Several adopters indicated that social norms (e.g., knowing that other organizations in their network were involved in the collaboration) played a strong role in their decision to participate, particularly for food retailers and small organizations. The opportunity itself to work in partnership with Health Canada and other organizations was seen as a prominent relative advantage by many organizations. Adopters were characterized as having high social participation and positive attitudes towards health, new ideas and Health Canada. The lack of exposure to the mass media channels used to diffuse the campaign and reserved attitudes towards Health Canada were prominent obstacles identified by a minority of health organizations, which challenged the decision to adopt the EWC. Most other barriers were considered as minor challenges and did not appear to impede the adoption process. Understanding factors that influence cross-sector adoption of nutrition initiatives can help decision makers target the most appropriate partners to advance public health objectives. 
Government health agencies are likely to find strong partners in organizations that share the same values as the initiative, have positive attitudes towards health, are deeply involved in social causes and value the notion of partnership.
Chung, Christina; Fischer, Leah S; O'Connor, Angelica; Shultz, Alvin
CDC's Epidemiology and Laboratory Capacity for Infectious Diseases (ELC) Cooperative Agreement aims to help health departments strengthen core epidemiology capacity needed to respond to a variety of emerging infectious diseases. In fiscal year 2014, $6 million was awarded to 41 health departments for flexible epidemiologists (FEs). FEs were intended to help meet health departments' unique needs and support unanticipated events that could require the diversion of resources to specific emerging or reemerging diseases. Explore multiple perspectives to characterize how FEs are utilized and to understand the perceived value of this strategy from the health department perspective. We conducted 14 in-depth interviews using a semistructured questionnaire with a heterogeneous sample of 8 state health departments; 2 different instruments were administered to ELC principal investigators (PIs) or supervisors, and FEs. The team produced a codebook consisting of both structural and data-driven codes to prepare for a thematic analysis of the data. Three major patterns emerged to describe how FEs are being used in health departments; most commonly, FEs were used to support priorities and gaps across a range of infectious diseases, with an emphasis on enteric diseases. Almost all of the health departments utilized FEs to assist in investigating and responding to outbreaks, maintaining and upgrading surveillance systems, and coordinating and collaborating with partners. Both PIs and supervisors highly valued the flexibility that FEs offered to their programs, because FEs were cross-trained and could help in situations where additional staff members were needed. ELC enhances epidemiology capacity in health departments by providing flexible personnel that help sustain areas with losses in capacity, addressing programmatic gaps, and supporting unanticipated events. 
Our findings support the notion that flexible personnel could be an effective model for strengthening epidemiology capacity among health departments. Our findings have practical implications for addressing the overall decline in the public health workforce, as well as the current context and environment of public health funding at both state and federal levels.
Chandler, Jennifer A; Sun, Jeffrey A; Racine, Eric
2017-01-01
Recently, the news media have reported on the discovery of covert awareness and the establishment of limited communication using a functional magnetic resonance imaging (fMRI) neuroimaging technique with several brain-injured patients thought to have been in a vegetative state. This discovery has raised many ethical, legal, and social questions related to quality of life, end-of-life decision making, diagnostic and prognostic accuracy in disorders of consciousness, resource allocation, and other issues. This project inquires into the public responses to these discoveries. We conducted a thematic analysis of online comments (n = 779) posted in response to 15 news articles and blog posts regarding the case of a Canadian patient diagnosed for 12 years as in a vegetative state, but who was reported in 2012 as having been able to communicate via fMRI. The online comments were coded using an iteratively refined codebook structured around 14 main themes. Among the most frequent public reactions revealed in the online comments were discussions of the quality of life of patients with disorders of consciousness, whether life-sustaining treatment should be withdrawn (and whether the fMRI communication technique should be used to ask patients about this), and misgivings about the accuracy of diagnosis in disorders of consciousness and brain death. These public perspectives are relevant to the obligations of clinicians, lawyers, and public policymakers to patients, families, and the public. Future work should consider how best to alleviate families' concerns as this type of research shakes their faith in diagnostic accuracy, to clarify the legal rules relating to advance directives in this context, and to address the manner in which public messaging might help to alleviate any indirect impact on confidence in the organ donation system.
"Fitspiration" on Social Media: A Content Analysis of Gendered Images.
Carrotte, Elise Rose; Prichard, Ivanka; Lim, Megan Su Cheng
2017-03-29
"Fitspiration" (also known as "fitspo") aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, it is currently unclear to what degree messages about body image and exercise differ by the gender of the subject. The aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender. Content tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook. Of the 476 posts extracted, 415 (87.2%) were relevant; most were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151/415 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely to be aged under 25 years (P<.001) than the male subjects, to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002). Female #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image. ©Elise Rose Carrotte, Ivanka Prichard, Megan Su Cheng Lim. 
Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.03.2017.
The research progress on Hodograph Method of aerodynamic design at Tsinghua University
NASA Technical Reports Server (NTRS)
Chen, Zuoyi; Guo, Jingrong
1991-01-01
Progress in the use of the Hodograph method of aerodynamic design is discussed. Some restrictive conditions were found in applying Hodograph design to transonic turbine and compressor cascades; within these conditions, the method is suitable not only for the transonic turbine cascade but also for the transonic compressor cascade. A three-dimensional Hodograph method will be developed once the basic equations for it are obtained. As examples, the use of the Hodograph method to design a transonic turbine cascade and a transonic compressor cascade is discussed.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1996-01-01
There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
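The appeal of ROW methods is that each stage requires only a linear solve with the Jacobian rather than a full nonlinear implicit solve. As a minimal illustration of that idea (not the embedded third-order method of the paper), the sketch below implements a first-order linearly implicit (Rosenbrock) Euler step for a scalar stiff test problem; the function names and test equation are illustrative assumptions.

```python
# Minimal sketch of a first-order Rosenbrock (linearly implicit Euler) step:
# each step solves the linear equation (1 - h*J) k = f(y) with J = df/dy,
# instead of iterating a nonlinear implicit stage. Scalar case for clarity.

def rosenbrock_euler_step(f, jac, y, h):
    """One linearly implicit Euler step: (1 - h*J) k = f(y), y_new = y + h*k."""
    J = jac(y)
    k = f(y) / (1.0 - h * J)   # the single linear 'solve' (scalar case)
    return y + h * k

def integrate(f, jac, y0, t_end, n_steps):
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = rosenbrock_euler_step(f, jac, y, h)
    return y

# Stiff test problem y' = -50 y, y(0) = 1; the exact solution exp(-50 t)
# decays to essentially zero, and the linearly implicit step stays stable.
f = lambda y: -50.0 * y
jac = lambda y: -50.0
y_num = integrate(f, jac, 1.0, 1.0, 200)
```

A production ROW method embeds a lower-order estimate for step-size control, which is exactly the design aspect the abstract says can be simplified by building around a good RK method.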
Wu, Rengmao; Hua, Hong; Benítez, Pablo; Miñano, Juan C.; Liang, Rongguang
2016-01-01
The energy efficiency and compactness of an illumination system are two main concerns in illumination design for extended sources. In this paper, we present two methods to design compact, ultra efficient aspherical lenses for extended Lambertian sources in two-dimensional geometry. The light rays are directed by two aspherical surfaces in the first method, and by one aspherical surface along with an optimized parabola in the second method. The principles and procedures of each design method are introduced in detail. Three examples are presented to demonstrate the effectiveness of these two methods in terms of performance and capability in designing compact, ultra efficient aspherical lenses. The comparisons made between the two proposed methods indicate that the second method is much simpler and easier to implement, and extends well to three-dimensional designs. PMID:29092336
A new design approach to MMI-based (de)multiplexers
NASA Astrophysics Data System (ADS)
Yueyu, Xiao; Sailing, He
2004-09-01
A novel design method for wavelength (de)multiplexers is presented. The output spectral response of the (de)multiplexer is designed from the viewpoint of FIR filters, allowing the device to be analyzed and designed in an explicit and simple way that avoids laborious mathematical analysis. A four-channel (de)multiplexer based on multimode interference (MMI) is designed as an example. The result agrees with that of the commonly used method and is verified by a finite-difference beam propagation method (FDBPM) simulation.
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed, followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during the design of an engine component; the limitation was overcome by augmenting the optimization with animation. Optimum solutions obtained were infeasible for the aircraft and airbreathing propulsion engine problems; alleviating this deficiency required cascading multiple algorithms. Profile optimization of a beam produced an irregular shape, and engineering intuition restored the regular shape. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture; resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. A structural design can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. 
In design optimization, the weight of the structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest, though some variation was noticed in the designs, which may be attributed to structural indeterminacy. It is prudent to develop the design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used; such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph whose center corresponded to the mean-valued design. A heavy design with weight approaching infinity would be required for a near-zero rate of failure, while weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of loads and material properties remains a challenge.
Permanent Ground Anchors : Stump Design Criteria
DOT National Transportation Integrated Search
1982-09-01
This document summarizes the main design methods used by the principal investigators in the design of permanent ground anchors, including basic concepts, design criteria, and analytical techniques. The application of these design methods is illustra...
Software Design Methods for Real-Time Systems
1989-12-01
This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and...
Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2001-01-01
A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
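The selection step at the heart of a multiobjective evolutionary algorithm is Pareto dominance: a design survives if no other candidate is at least as good in every objective and strictly better in one. The sketch below extracts a Pareto front for the two objectives named in the abstract (maximize total head, minimize input power); the candidate designs and their scores are made-up illustrations, not the paper's data.

```python
# Minimal sketch of Pareto-front extraction, the core selection idea behind
# multiobjective evolutionary algorithms (MOEA). Head: higher is better;
# power: lower is better. Candidate values below are illustrative only.

def pareto_front(designs):
    """Return the designs not dominated by any other design."""
    front = []
    for d in designs:
        dominated = any(
            other["head"] >= d["head"] and other["power"] <= d["power"]
            and (other["head"] > d["head"] or other["power"] < d["power"])
            for other in designs
        )
        if not dominated:
            front.append(d)
    return front

candidates = [
    {"name": "A", "head": 100.0, "power": 50.0},
    {"name": "B", "head": 105.0, "power": 49.5},  # dominates A outright
    {"name": "C", "head": 110.0, "power": 52.0},  # trade-off: more head, more power
    {"name": "D", "head": 95.0,  "power": 55.0},  # dominated by B
]
front = pareto_front(candidates)  # B and C remain as the trade-off set
```

An MOEA evolves the population toward this front rather than toward a single optimum, which is why the abstract reports a set of Pareto-optimal solutions rather than one design.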
Green design assessment of electromechanical products based on group weighted-AHP
NASA Astrophysics Data System (ADS)
Guo, Jinwei; Zhou, MengChu; Li, Zhiwu; Xie, Huiguang
2015-11-01
Manufacturing industry is the backbone of a country's economy, while environmental pollution is a serious problem that humanity must face today. The green design of electromechanical products based on enterprise information systems is an important way to address this problem. Designing green products requires both advanced design methods and effective assessment methods for electromechanical products, and making an objective and precise assessment of green design is one of the problems that must be solved when green design is conducted. An assessment method for the green design of electromechanical products based on Group Weighted-AHP (Analytic Hierarchy Process) is proposed in this paper, taking the characteristics of green products into account. The assessment steps of green design are also established. The results are illustrated via the assessment of a refrigerator design.
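The group weighted-AHP computation can be sketched in two steps: aggregate each expert's pairwise-comparison matrix with a weighted geometric mean, then estimate criterion priorities from the normalized row geometric means of the aggregated matrix. The criteria, comparison matrices, and expert weights below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of group weighted-AHP: experts' pairwise-comparison matrices
# over green-design criteria are combined by a weighted geometric mean, and
# priorities are the normalized row geometric means (a standard AHP estimate).

import math

def aggregate(matrices, expert_weights):
    """Element-wise weighted geometric mean of the experts' comparison matrices."""
    n = len(matrices[0])
    agg = [[1.0] * n for _ in range(n)]
    for m, w in zip(matrices, expert_weights):
        for i in range(n):
            for j in range(n):
                agg[i][j] *= m[i][j] ** w
    return agg

def priorities(matrix):
    """Normalized row geometric means of a pairwise-comparison matrix."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments over three criteria, e.g. energy use > recyclability > emissions.
expert1 = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]
expert2 = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
w = priorities(aggregate([expert1, expert2], [0.6, 0.4]))
```

A full assessment would also check each matrix's consistency ratio before aggregation; that step is omitted here for brevity.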
General method for designing wave shape transformers.
Ma, Hua; Qu, Shaobo; Xu, Zhuo; Wang, Jiafu
2008-12-22
An effective method for designing wave shape transformers (WSTs) is investigated by adopting the coordinate transformation theory. Following this method, devices that transform electromagnetic (EM) wave fronts from one style with arbitrary shape and size to another can be designed. To verify this method, three examples in 2D spaces are also presented. Compared with methods proposed elsewhere in the literature, this method offers a general procedure for designing WSTs, and is thus of great importance for the potential and practical applications of such devices.
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating in the design space. The probability-based method is computationally more complex, but it provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, the probability does not change abruptly at the edge of a design space obtained with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
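The probability-based method described above can be sketched as a Monte Carlo loop: at each grid point of the operating parameters, simulate experimental error around a fitted process model and count how often the quality index meets its standard. The quadratic model, noise level, thresholds, and parameter ranges below are illustrative assumptions, not the paper's fitted values.

```python
# Minimal sketch of probability-based design-space calculation: simulate
# experimental error at each operating point and keep the points whose
# probability of meeting the standard exceeds an acceptance threshold.

import random

random.seed(0)

def predicted_yield(temp, time):
    """Hypothetical fitted model of extraction yield (%) vs. temperature (deg C), time (h)."""
    return 80.0 - 0.02 * (temp - 85.0) ** 2 - 0.5 * (time - 2.0) ** 2

def prob_meets_standard(temp, time, standard=78.0, sigma=1.0, n_sim=2000):
    """Monte Carlo estimate of P(yield + error >= standard) at one grid point."""
    hits = sum(
        predicted_yield(temp, time) + random.gauss(0.0, sigma) >= standard
        for _ in range(n_sim)
    )
    return hits / n_sim

# The design space is the set of grid points whose probability exceeds 0.90.
grid = [(t, d) for t in range(75, 96, 5) for d in (1.5, 2.0, 2.5)]
design_space = [(t, d) for (t, d) in grid if prob_meets_standard(t, d) >= 0.90]
```

Because the acceptance is a probability threshold rather than a hard boundary on the mean prediction, points near the edge drop out smoothly, which matches the abstract's observation that the probability does not jump at the edge of the design space.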
NASA Astrophysics Data System (ADS)
Li, Leihong
A modular structural design methodology for composite blades is developed. This design method can be used to design composite rotor blades with sophisticated geometric cross-sections. The method hierarchically decomposes the highly coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted based on multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used to obtain equivalent one-dimensional beam properties. Compared with traditional design methodology, the proposed method is more efficient and accurate. The proposed method is then used to study three design problems that have not been investigated before. The first is adding manufacturing constraints to the design optimization. The introduction of manufacturing constraints complicates the optimization process; however, the resulting design benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines the fatigue analysis with the optimization process; the durability or fatigue analysis employs a strength-based model, and the design is subject to stiffness, frequency, and durability constraints. Finally, the impact of manufacturing uncertainty on rotor blade aeroelastic behavior is investigated, and a probabilistic design method is proposed to control the impact of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.
Controlling lightwave in Riemann space by merging geometrical optics with transformation optics.
Liu, Yichao; Sun, Fei; He, Sailing
2018-01-11
In geometrical optical design, we only need to choose a suitable combination of lenses, prisms, and mirrors to design an optical path. It is a simple and classic method for engineers. However, fantastical optical devices such as invisibility cloaks, optical wormholes, etc. cannot be designed by geometrical optics. Transformation optics has paved the way for these complicated designs. However, controlling the propagation of light by transformation optics is not a direct design process like geometrical optics. In this study, a novel mixed method for optical design is proposed which has both the simplicity of classic geometrical optics and the flexibility of transformation optics. This mixed method overcomes the limitations of classic optical design; at the same time, it gives intuitive guidance for optical design by transformation optics. Three novel optical devices with fantastic functions have been designed using this mixed method, providing asymmetrical transmission, bidirectional focusing, and bidirectional cloaking. These optical devices cannot be implemented by classic optics alone and are also too complicated to be designed by pure transformation optics. Numerical simulations based on both the ray tracing method and the full-wave simulation method are carried out to verify the performance of these three optical devices.
Bourgault, Patricia; Gallagher, Frances; Michaud, Cécile; Saint-Cyr-Tribble, Denise
2010-12-01
The use of a mixed-methods research design raises many questions, especially regarding the paradigmatic position. Within this perspective, a mixed-methods design may be considered the best way of answering a research question, and the question orients the researcher toward one of the various subtypes of mixed-methods design. To illustrate the use of this kind of design, we present a study conducted in nursing sciences. In this article, the challenges raised by mixed-methods designs and the place of this type of research in nursing sciences are discussed.
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed-method research designs (MM) combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each design a research example is presented. The use of MM can add value for improving clinical practice: through the integration of qualitative and quantitative methods, researchers can better assess the complex phenomena typical of nursing.
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.; Jordan, T. A.; Soltesz, R. G.; Woodsum, H. C.
1969-01-01
Eight computer programs make up a nine volume synthesis containing two design methods for nuclear rocket radiation shields. The first design method is appropriate for parametric and preliminary studies, while the second accomplishes the verification of a final nuclear rocket reactor design.
Controller design via structural reduced modeling by FETM
NASA Technical Reports Server (NTRS)
Yousuff, Ajmal
1987-01-01
The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computations involved in analysis of structures. This widely accepted method, however, has certain limitations, and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method easily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with less computations, (2) overcome limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.
Stiffness Parameter Design of Suspension Element of Under-Chassis-Equipment for A Rail Vehicle
NASA Astrophysics Data System (ADS)
Ma, Menglin; Wang, Chengqiang; Deng, Hai
2017-06-01
According to the frequency configuration requirements for the vibration of railway under-chassis equipment, the three-dimensional stiffness of the suspension elements of the under-chassis equipment is designed based on a static principle and on a dynamic principle. The design results for a concrete engineering case show that, compared with the design method based on the static principle, the three-dimensional stiffness of the suspension elements designed by the dynamic-principle method is more uniform. Frequency and decoupling-degree analysis shows that the calculated frequency of the under-chassis equipment under the two design methods is basically the same as the predetermined frequency. Compared with the design method based on the static principle, the method based on the dynamic principle keeps the decoupling degree high and effectively reduces the coupling vibration of the corresponding vibration modes, which in turn reduces the fatigue damage of the key parts of the suspension elements.
The equivalent magnetizing method applied to the design of gradient coils for MRI.
Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart
2008-01-01
This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its representation as a surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream lines define the coil current pattern. This method can be applied to coils wound on arbitrarily shaped surfaces. A single-layer unshielded transverse gradient coil is designed and compared with the designs obtained using two conventional methods. Through the presented example we demonstrate that the unconventional current patterns generated by the magnetizing current method produce superior gradient coil performance compared with coils designed by conventional methods.
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
The microchannel cooler offers a new method for the cooling of high power diode lasers, with the advantages of small volume, high thermal-dissipation efficiency, and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design method, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was calculated using the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The resulting optimal design compromises between cooling performance and robustness, showing that this design method is practical.
In silico methods for design of biological therapeutics.
Roy, Ankit; Nair, Sanjana; Sen, Neeladri; Soni, Neelesh; Madhusudhan, M S
2017-12-01
It has been twenty years since the first rationally designed small molecule drug was introduced into the market. Since then, we have progressed from designing small molecules to designing biotherapeutics. This class of therapeutics includes designed proteins, peptides and nucleic acids that could more effectively combat drug resistance and even act in cases where the disease is caused because of a molecular deficiency. Computational methods are crucial in this design exercise and this review discusses the various elements of designing biotherapeutic proteins and peptides. Many of the techniques discussed here, such as the deterministic and stochastic design methods, are generally used in protein design. We have devoted special attention to the design of antibodies and vaccines. In addition to the methods for designing these molecules, we have included a comprehensive list of all biotherapeutics approved for clinical use. Also included is an overview of methods that predict the binding affinity, cell penetration ability, half-life, solubility, immunogenicity and toxicity of the designed therapeutics. Biotherapeutics are only going to grow in clinical importance and are set to herald a new generation of disease management and cure. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied for re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.
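The dominant-parameter identification step of the Taguchi approach can be sketched concretely: run an orthogonal array of experiments, convert each response to a signal-to-noise (S/N) ratio, and compare each factor's main effect on the S/N ratio. The L4 array below is standard, but the factor names and torque responses are made-up illustrations, not values from the MR-brake study.

```python
# Minimal sketch of Taguchi dominant-parameter screening: an L4(2^3)
# orthogonal array, a larger-the-better S/N ratio, and per-factor main
# effects. The factor with the largest effect dominates the design.

import math

# L4(2^3) orthogonal array: three two-level factors covered in four runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
torque = [4.0, 4.2, 7.9, 8.1]  # hypothetical brake torques (N*m) per run

def sn_larger_better(y):
    """Larger-the-better S/N ratio for a single response value."""
    return -10.0 * math.log10(1.0 / y ** 2)

sn = [sn_larger_better(y) for y in torque]

def main_effect(factor):
    """|mean S/N at level 1 - mean S/N at level 0| for one factor column."""
    lvl0 = [s for run, s in zip(L4, sn) if run[factor] == 0]
    lvl1 = [s for run, s in zip(L4, sn) if run[factor] == 1]
    return abs(sum(lvl1) / len(lvl1) - sum(lvl0) / len(lvl0))

effects = {name: main_effect(i) for i, name in enumerate(["gap", "turns", "radius"])}
dominant = max(effects, key=effects.get)  # largest main effect wins
```

Because the orthogonal array covers the factor space in far fewer runs than a full factorial, this screening is what lets the search space shrink before any detailed optimization, matching the abstract's claim of reducing design time from weeks to minutes.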
Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea
2017-07-18
The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, trials applying these methods often do not report them as adaptive designs, making it more difficult to capture the emerging use of these designs. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or must be inferred, and whether these methods are applied prospectively or concurrently. Three databases, Embase, Ovid and PubMed, were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified, of which 54 were eligible after screening. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some sort of stopping criteria. Only two papers explicitly stated the term 'adaptive design'; for the remaining papers, the use of adaptive methods had to be inferred. Sixty-five applications of adaptive design methods were identified, of which the most common was adaptation using group sequential methods. This review indicates that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and provide an essential aid to those involved with clinical trials.
Comparing Methods for Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Lai, Chok Fung
2011-01-01
This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with two times today's demand traffic. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to cut the delay of their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.
Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results for a User Study
Garneau, Christopher J.; Erbacher, Robert F.
2016-11-01
US Army Research Laboratory report, approved for public release; study period January 2013–September 2015.
Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings.
Wong, Vivian C; Steiner, Peter M
2018-01-01
Over the last three decades, a research design has emerged to evaluate the performance of nonexperimental (NE) designs and design features in field settings. It is called the within-study comparison (WSC) approach or the design replication study. In the traditional WSC design, treatment effects from a randomized experiment are compared to those produced by an NE approach that shares the same target population. The nonexperiment may be a quasi-experimental design, such as a regression-discontinuity or an interrupted time-series design, or an observational study approach that includes matching methods, standard regression adjustments, and difference-in-differences methods. The goals of the WSC are to determine whether the nonexperiment can replicate results from a randomized experiment (which provides the causal benchmark estimate), and the contexts and conditions under which these methods work in practice. This article presents a coherent theory of the design and implementation of WSCs for evaluating NE methods. It introduces and identifies the multiple purposes of WSCs, required design components, common threats to validity, design variants, and causal estimands of interest in WSCs. It highlights two general approaches for empirical evaluations of methods in field settings: WSC designs with independent, and with dependent, benchmark and NE arms. The article discusses the advantages and disadvantages of each approach, and the conditions and contexts under which each is optimal for addressing methodological questions.
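The WSC logic can be made concrete with a small simulation: draw a randomized arm (which yields the causal benchmark) and a confounded nonexperimental arm from the same data-generating population, then compare naive and covariate-adjusted NE estimates to the benchmark. The outcome model, selection mechanism, and stratification adjustment below are illustrative assumptions, not the article's designs.

```python
# Minimal sketch of a within-study comparison: a randomized benchmark arm and
# a self-selected (confounded) nonexperimental arm share one population; the
# naive NE estimate is biased, and adjusting for the confounder shrinks the bias.

import random

random.seed(1)
TRUE_EFFECT = 2.0

def outcome(treated, x):
    # Outcome depends on treatment and on a confounder x.
    return 1.0 + TRUE_EFFECT * treated + 3.0 * x + random.gauss(0.0, 0.5)

def draw_rct(n):
    # Randomized benchmark: treatment assigned independently of x.
    return [(t, x, outcome(t, x))
            for _ in range(n)
            for t, x in [(random.randint(0, 1), random.gauss(0.0, 1.0))]]

def draw_nonexperiment(n):
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        t = 1 if x + random.gauss(0.0, 1.0) > 0 else 0  # selection on x
        data.append((t, x, outcome(t, x)))
    return data

def mean_diff(data):
    y1 = [y for t, _, y in data if t == 1]
    y0 = [y for t, _, y in data if t == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

def stratified_diff(data, n_strata=20):
    # Stand-in for regression adjustment: fine stratification on the confounder.
    data = sorted(data, key=lambda r: r[1])
    size = len(data) // n_strata
    diffs, weights = [], []
    for s in range(n_strata):
        stratum = data[s * size:(s + 1) * size]
        has_both = any(t == 1 for t, _, _ in stratum) and any(t == 0 for t, _, _ in stratum)
        if has_both:
            diffs.append(mean_diff(stratum))
            weights.append(len(stratum))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

benchmark = mean_diff(draw_rct(4000))        # replicates TRUE_EFFECT
naive = mean_diff(draw_nonexperiment(4000))  # biased upward by selection on x
adjusted = stratified_diff(draw_nonexperiment(4000))
```

In WSC terms, the question is whether `adjusted` replicates `benchmark`; here the adjustment removes most, but not all, of the selection bias, illustrating why WSCs also ask under which conditions an NE method works.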
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to certain constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
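The final search step described above (genetic algorithm applied to a response-surface surrogate) can be sketched in a few lines. Here the quadratic surrogate is written out directly, standing in for one fitted to CFD samples, and a simple real-coded genetic algorithm with elitism, blend crossover, and Gaussian mutation searches it; the coefficients, bounds, and GA settings are illustrative assumptions.

```python
# Minimal sketch of GA-on-surrogate optimization: a cheap quadratic response
# surface replaces expensive CFD evaluations, and a real-coded genetic
# algorithm maximizes it over the design-variable bounds.

import random

random.seed(42)

def surrogate(x1, x2):
    """Stand-in fitted response surface; its maximum sits at (0.3, -0.2)."""
    return 1.0 - (x1 - 0.3) ** 2 - 2.0 * (x2 + 0.2) ** 2

def genetic_maximize(f, bounds, pop_size=40, generations=60, mut=0.1):
    lo, hi = bounds
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: f(*p), reverse=True)
        elite = scored[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()                  # blend crossover of two parents
            child = tuple(w * ai + (1 - w) * bi for ai, bi in zip(a, b))
            child = tuple(min(hi, max(lo, c + random.gauss(0, mut)))  # mutation
                          for c in child)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: f(*p))

best = genetic_maximize(surrogate, (-1.0, 1.0))
```

Because every GA evaluation hits the surrogate rather than a CFD solve, thousands of candidates can be screened cheaply, which is the main computational saving the integrated method relies on.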
Freeform object design and simultaneous manufacturing
NASA Astrophysics Data System (ADS)
Zhang, Wei; Zhang, Weihan; Lin, Heng; Leu, Ming C.
2003-04-01
Today's product design, especially consumer product design, focuses more and more on individuality, originality, and time to market. One way to meet these challenges is to use interactive and creative product design methods together with rapid prototyping/rapid tooling. This paper presents a novel Freeform Object Design and Simultaneous Manufacturing (FODSM) method that combines natural interaction in the design phase with simultaneous manufacturing in the prototyping phase. The natural interactive three-dimensional design environment is achieved by adopting virtual reality technology. The geometry of the designed object is defined through a process of "virtual sculpting", during which the designer can touch and visualize the designed object and can hear the virtual manufacturing environment noise. During the design process, the computer records the sculpting trajectories and automatically translates them into NC codes so as to simultaneously machine the designed part. The paper introduces the principle, implementation process, and key techniques of the new method, and compares it with other popular rapid prototyping methods.
Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E
2012-04-06
Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.
Optimal cure cycle design of a resin-fiber composite laminate
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeenson
1987-01-01
A unified computer aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment with the same AAA methods (i.e. the knowledge) is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge based design have been developed for detailed design, currently no such integrated knowledge based conceptual and preliminary airplane design system exists. The third generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single engine propeller, business jets, airliners, and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results, to validate these methods. 
One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML, will lead to quicker resolving of problems as opposed to conventional methods.
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis which achieves optimum seismic performance by combining isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach for both the upgrade of existing seismically deficient bridges and the design of new isolated bridges.
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and process analytical technology (PAT).
Investigating the Use of Design Methods by Capstone Design Students at Clemson University
ERIC Educational Resources Information Center
Miller, W. Stuart; Summers, Joshua D.
2013-01-01
The authors describe a preliminary study to understand the attitude of engineering students regarding the use of design methods in projects to identify the factors either affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
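The iterative reduction idea can be illustrated outside of imaging: starting from a dense candidate sampling, greedily drop the point whose removal least degrades a D-optimality (Fisher-information determinant) criterion. The sketch below is a minimal assumption-laden illustration using a hypothetical two-parameter exponential model, not the quantitative MT signal equation or the paper's actual criterion:

```python
import math

def info_matrix(points, a=1.0, b=0.5):
    """2x2 Fisher information for the toy model y = a*exp(-b*x) (params a, b)."""
    faa = fab = fbb = 0.0
    for x in points:
        e = math.exp(-b * x)
        da, db = e, -a * x * e          # sensitivities dy/da, dy/db
        faa += da * da
        fab += da * db
        fbb += db * db
    return faa, fab, fbb

def det(f):
    faa, fab, fbb = f
    return faa * fbb - fab * fab

def reduce_design(points, keep):
    """Greedily drop the sample whose removal hurts D-optimality least."""
    pts = list(points)
    while len(pts) > keep:
        best = max(range(len(pts)),
                   key=lambda i: det(info_matrix(pts[:i] + pts[i + 1:])))
        pts.pop(best)
    return pts

design = reduce_design([0.5 * k for k in range(1, 13)], keep=4)
```

Because each retained point is chosen against the whole remaining set, the final design naturally avoids the clustering and repeated measures that the abstract highlights as a weakness of free-form optimal designs.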
A direct approach to the design of linear multivariable systems
NASA Technical Reports Server (NTRS)
Agrawal, B. L.
1974-01-01
Design of multivariable systems is considered and design procedures are formulated in the light of the most recent work on model matching. The term model matching is used exclusively to mean matching the input-output behavior of two systems. The term is used in the frequency domain to indicate the comparison of two transfer matrices containing transfer functions as elements. Design methods in which non-interaction is not used as a criterion were studied. Two design methods are considered. The first method of design is based solely upon the specification of generalized error coefficients for each individual transfer function of the overall system transfer matrix. The second design method is called the pole fixing method because all the system poles are fixed at preassigned positions. The zeros of terms either above or below the diagonal are partially fixed via steady state error coefficients. The advantages and disadvantages of each method are discussed and an example is worked to demonstrate their uses. The special cases of triangular decoupling and minimum constraints are discussed.
Stochastic Methods for Aircraft Design
NASA Technical Reports Server (NTRS)
Pelz, Richard B.; Ogot, Madara
1998-01-01
The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
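As a rough illustration of the approach (not the modified SA variant developed in the study), a basic simulated-annealing loop on a one-dimensional multimodal objective looks like the sketch below; the objective, step size, and cooling schedule are all hypothetical:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95,
                        n_iter=500, seed=0):
    """Minimize `objective` over one real variable with basic SA."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)        # random neighbour move
        fc = objective(cand)
        # always accept downhill; accept uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                               # geometric cooling
    return best_x, best_f

# multimodal test objective with its global minimum at x = 0
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_opt, f_opt = simulated_annealing(f, x0=3.0)
```

The early high-temperature phase lets the search hop between the ripples of the objective; the study's contribution was reducing how many objective evaluations such a loop consumes, since each evaluation there was an expensive CFD or MDO analysis.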
Designs and methods used in published Australian health promotion evaluations 1992-2011.
Chambers, Alana Hulme; Murphy, Kylie; Kolbe, Anthony
2015-06-01
To describe the designs and methods used in published Australian health promotion evaluation articles between 1992 and 2011. Using a content analysis approach, we reviewed 157 articles to analyse patterns and trends in designs and methods in Australian health promotion evaluation articles. The purpose was to provide empirical evidence about the types of designs and methods used. The most common type of evaluation conducted was impact evaluation. Quantitative designs were used exclusively in more than half of the articles analysed. Almost half the evaluations utilised only one data collection method. Surveys were the most common data collection method used. Few articles referred explicitly to an intended evaluation outcome or benefit and references to published evaluation models or frameworks were rare. This is the first time Australian-published health promotion evaluation articles have been empirically investigated in relation to designs and methods. There appears to be little change in the purposes, overall designs and methods of published evaluations since 1992. More methodologically transparent and sophisticated published evaluation articles might be instructional, and even motivational, for improving evaluation practice and result in better public health interventions and outcomes. © 2015 Public Health Association of Australia.
Hudson, Parisa; Hudson, Stephen D.; Handler, William B.; Scholl, Timothy J.; Chronik, Blaine A.
2010-01-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods was <15%. Comparison of wire patterns obtained using the two design algorithms shows that minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method. PMID:20411157
A modified Finite Element-Transfer Matrix for control design of space structures
NASA Technical Reports Server (NTRS)
Tan, T.-M.; Yousuff, A.; Bahar, L. Y.; Konstandinidis, M.
1990-01-01
The Finite Element-Transfer Matrix (FETM) method was developed for reducing the computational efforts involved in structural analysis. While being widely used by structural analysts, this method does, however, have certain limitations, particularly when used for the control design of large flexible structures. In this paper, a new formulation based on the FETM method is presented. The new method effectively overcomes the limitations in the original FETM method, and also allows an easy construction of reduced models that are tailored for the control design. Other advantages of this new method include the ability to extract open loop frequencies and mode shapes with less computation, and simplification of the design procedures for output feedback, constrained compensation, and decentralized control. The development of this new method and the procedures for generating reduced models using this method are described in detail and the role of the reduced models in control design is discussed through an illustrative example.
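The transfer-matrix half of the method rests on a simple identity: the state vector at one end of a chain of elements equals the product of the element transfer matrices applied to the state at the other end. A minimal sketch with axial springs in series (state = displacement and axial force, 2x2 matrices; an assumption-level illustration, not the paper's formulation):

```python
def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def spring_transfer(k):
    """Field transfer matrix of an axial spring: maps state (u, N) across it."""
    return [[1.0, 1.0 / k], [0.0, 1.0]]

def chain(stiffnesses):
    """Transfer matrix of springs in series = product of element matrices."""
    t = [[1.0, 0.0], [0.0, 1.0]]              # start from the identity
    for k in stiffnesses:
        t = matmul2(spring_transfer(k), t)
    return t

t = chain([2.0, 4.0, 4.0])                    # three springs in series
# total flexibility accumulates in the (1,2) entry: 1/2 + 1/4 + 1/4 = 1.0
```

Because the chain is built by matrix products rather than by assembling one global stiffness matrix, the state-vector size stays fixed regardless of the number of elements, which is the computational saving the FETM method exploits.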
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
ERIC Educational Resources Information Center
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS).
Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M; Khan, Ajmal
2016-01-01
This article compares the study design and statistical methods used in 2005, 2010 and 2015 of Pakistan Journal of Medical Sciences (PJMS). Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015) in which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods were found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher's exact tests (n=205, 47.8%) and student t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher's exact test, logistic regression, epidemiological statistics, and non-parametric tests. This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and frequency improved from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis and cross-sectional the most common study design in the published articles.
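The significance of such year-to-year trends is typically assessed with a Pearson chi-square test on a contingency table of counts. A sketch: the column totals below mirror the article counts per year from the abstract, but the split of t-test users per year is invented for illustration:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# rows: articles using the t-test / not using it; columns: 2005, 2010, 2015
# (hypothetical per-year split; only the totals 74/179/176 and 186 are from the text)
stat = chi_square([[20, 70, 96], [54, 109, 80]])
```

With 2 degrees of freedom, a statistic this large is well past the usual 5.99 critical value, which is the kind of evidence behind the abstract's "significant increase" claim.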
Research on Visualization Design Method in the Field of New Media Software Engineering
NASA Astrophysics Data System (ADS)
Deqiang, Hu
2018-03-01
As science and technology advance and market competition and user demands intensify, a new design and application method has emerged in the field of new media software engineering: the visualization design method. Applying the visualization design method to new media software engineering can not only improve the operational efficiency of new media software engineering projects but, more importantly, enhance the quality of software development through appropriate media of communication and transformation; on this basis, it also continuously promotes the progress and development of new media software engineering in China. This article therefore concretely analyses the application of the visualization design method in the field of new media software engineering, starting from an overview of visualization design methods and building on a systematic analysis of the underlying technology.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
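Of the three propagation methods named above, Monte Carlo simulation and the first-order method of moments are easy to sketch side by side. The toy weight function below is hypothetical, standing in for the aircraft analysis program, and the input means and standard deviations are invented:

```python
import random
import statistics

def weight(span, chord):
    """Toy aircraft-weight surrogate (hypothetical, not the study's code)."""
    return 1000.0 + 12.0 * span ** 1.5 + 40.0 * chord ** 2

def monte_carlo(n=20000, seed=1):
    """Propagate Gaussian input uncertainty by direct sampling."""
    rng = random.Random(seed)
    samples = [weight(rng.gauss(30.0, 0.5), rng.gauss(3.0, 0.1))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# first-order method of moments:
# var(W) ~ (dW/dspan * sigma_span)^2 + (dW/dchord * sigma_chord)^2
dw_dspan = 12.0 * 1.5 * 30.0 ** 0.5       # dW/dspan at the mean span
dw_dchord = 40.0 * 2.0 * 3.0              # dW/dchord at the mean chord
mom_std = (dw_dspan ** 2 * 0.5 ** 2 + dw_dchord ** 2 * 0.1 ** 2) ** 0.5

mc_mean, mc_std = monte_carlo()
```

For small input uncertainties on a smooth function the two estimates agree closely; the agreement breaks down on discontinuous design spaces, which is exactly the difficulty the study's example problems pose.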
Trends in study design and the statistical methods employed in a leading general medicine journal.
Gosho, M; Sato, Y; Nagashima, K; Takahashi, S
2018-02-01
Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate study design and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Under the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazard model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in light of the information found in some publications. 
Use of adaptive design with interim analyses has been increasing since the publication of the FDA guidance for adaptive design. © 2017 John Wiley & Sons Ltd.
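The Kaplan-Meier estimator mentioned above is compact enough to state in a few lines: at each distinct event time, the survival estimate is multiplied by one minus the fraction of at-risk subjects who had the event. A minimal sketch on toy data, with no confidence bands or competing-risk handling:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        deaths, j = 0, i
        while j < n and data[j][0] == t:       # group tied times
            deaths += data[j][1]
            j += 1
        if deaths:
            surv *= 1.0 - deaths / (n - i)     # n - i subjects still at risk
            curve.append((t, surv))
        i = j
    return curve

# five subjects: event at t=2, censored at 3, events at 4 and 5, censored at 6
curve = kaplan_meier([2, 3, 4, 5, 6], [1, 0, 1, 1, 0])
```

Censored subjects leave the risk set without forcing a step in the curve, which is how the estimator uses incomplete follow-up; the Gray and Fine-Gray methods cited in the abstract extend this idea to competing risks.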
Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill
2016-08-08
Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and optical system. A design method for compensating for the light distribution of an extended source after the initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy with multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, design for the discrete planar optics was achieved for applications in illumination for LED flash. In addition, discrete planar optics for LED illuminance were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle, and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.
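The energy-mapping step itself can be sketched in one dimension for a rotationally symmetric system: equate cumulative energy fractions of the source's angular distribution and the target's radial illuminance distribution, then read off the angle-to-radius mapping. The Lambertian source and uniform-disc target below are assumptions for illustration, and this point-source sketch ignores exactly the extended-source effect the paper addresses:

```python
import math

def cumulative(weights):
    """Running energy fraction of a sampled distribution."""
    total = sum(weights)
    out, acc = [], 0.0
    for w in weights:
        acc += w
        out.append(acc / total)
    return out

def invert(fraction, xs, cdf):
    """x at which the cumulative curve first reaches `fraction` (linear interp.)."""
    for i, c in enumerate(cdf):
        if c >= fraction:
            if i == 0:
                return xs[0]
            x0, x1, c0 = xs[i - 1], xs[i], cdf[i - 1]
            return x0 + (x1 - x0) * (fraction - c0) / (c - c0)
    return xs[-1]

# source: Lambertian LED, energy per angular bin ~ cos(theta) * sin(theta)
thetas = [math.radians(5 * k) for k in range(1, 18)]    # 5 deg .. 85 deg
src_cdf = cumulative([math.cos(t) * math.sin(t) for t in thetas])
# target: uniform illuminance on a unit disc, energy per radial bin ~ r
radii = [0.1 * k for k in range(1, 11)]
tgt_cdf = cumulative(radii)
# pair each source angle with the target radius at the same energy fraction
mapping = [(t, invert(f, radii, tgt_cdf)) for t, f in zip(thetas, src_cdf)]
```

Each surface facet of the optic is then shaped to deflect rays at a given angle onto the mapped radius; the paper's contribution is correcting this construction when the source has finite emitting area.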
How to Construct a Mixed Methods Research Design.
Schoonenboom, Judith; Johnson, R Burke
2017-01-01
This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.
Review of design optimization methods for turbomachinery aerodynamics
NASA Astrophysics Data System (ADS)
Li, Zhihui; Zheng, Xinqian
2017-08-01
In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
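The U-statistic view of reader performance referenced above comes from the fact that the empirical AUC equals a two-sample Mann-Whitney statistic: the proportion of diseased/healthy case pairs the reader ranks correctly, with ties counting one half. A minimal sketch with invented rating data:

```python
def auc_u_statistic(diseased, healthy):
    """Empirical AUC as a two-sample U-statistic: the probability a reader
    scores a diseased case higher than a healthy one (ties count 1/2)."""
    wins = 0.0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))

# hypothetical 5-point confidence ratings from one reader on one modality
auc = auc_u_statistic([3, 4, 4, 5], [1, 2, 4, 3])
```

The three-sample extension cited in the abstract generalizes this pairwise-comparison structure to handle the variance components introduced when different readers see different patient samples.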
NASA Astrophysics Data System (ADS)
Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary
2015-10-01
Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes aspects of creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.
Space Radiation Transport Methods Development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2002-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation, allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, and enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
ERIC Educational Resources Information Center
Palladino, John M.
2009-01-01
Most models of mixed methods research design place equal emphasis on qualitative and quantitative data analysis and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed methods design to examine special education teachers' advocacy and…
Integrating Software-Architecture-Centric Methods into the Rational Unified Process
2004-07-01
One issue that needs to be addressed when placing the Quality Attribute Workshop (QAW) in a life-cycle context is how scenarios produced in a QAW can be used by a software architecture design method. The report's chapter on architecture design describes the Attribute-Driven Design (ADD) method.
Using mixed methods effectively in prevention science: designs, procedures, and examples.
Zhang, Wanqing; Watanabe-Galloway, Shinobu
2014-10-01
There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis on methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires open-mindedness and reflection from the involved researchers.
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "systems analysis" with the techniques of "computer graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis problems. The method can be extended to other design and analysis aspects of oil and gas wells.
[Optimum design of imaging spectrometer based on toroidal uniform-line-spaced (TULS) spectrometer].
Xue, Qing-Sheng; Wang, Shu-Rong
2013-05-01
Based on geometrical aberration theory, an optimal design method for an imaging spectrometer based on a toroidal uniform-line-spaced grating spectrometer is proposed. To obtain the best optical parameters, optimization is carried out in two stages using a genetic algorithm (GA) and the optical design software ZEMAX. A far-ultraviolet (FUV) imaging spectrometer is designed using this method. The working waveband is 110-180 nm, the slit size is 50 μm x 5 mm, and the numerical aperture is 0.1. Using ZEMAX, the design result is analyzed and evaluated. The results indicate that the MTF for different wavelengths is higher than 0.7 at the Nyquist frequency of 10 lp/mm, and the RMS spot radius is less than 14 μm. Good imaging quality is achieved over the whole working waveband, and the design requirements of 0.5 mrad spatial resolution and 0.6 nm spectral resolution are satisfied. The results confirm that the proposed optimal design method is feasible. The method can be applied to other wavebands and offers guidance for designing grating-dispersion imaging spectrometers.
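The two-stage optimization described above pairs a global stochastic search with refinement in ZEMAX. ZEMAX itself cannot be reproduced here, so the sketch below runs a minimal genetic algorithm against a stand-in merit function; the merit function, bounds, and GA settings are all hypothetical, not taken from the paper.

```python
import random

def ga_minimize(f, bounds, pop=30, gens=60, seed=1):
    """Tiny elitist genetic algorithm: keep the best half, breed children
    by blend crossover plus Gaussian mutation, clamp to the bounds."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)                 # best first
        elite = P[: pop // 2]         # survivors carried over unchanged
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.02 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return min(P, key=f)

# Stand-in "merit function" (the real one would come from ray tracing);
# its minimum sits at (1.0, -2.0).
merit = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
best = ga_minimize(merit, [(-5, 5), (-5, 5)])
```

In the paper's workflow the GA result would then seed a local optimizer inside ZEMAX; here the GA alone already drives the toy merit close to its minimum.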
Using Aerospace Technology To Design Orthopedic Implants
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Mraz, P. J.; Davy, D. T.
1996-01-01
Technology originally developed to optimize the designs of composite-material aerospace structural components was used to develop a method for optimizing the designs of orthopedic implants. The development effort focused on designing knee implants, with the long-term goal of developing a method for optimizing the designs of orthopedic implants in general.
Mechanistic flexible pavement overlay design program : tech summary.
DOT National Transportation Integrated Search
2009-07-01
The Louisiana Department of Transportation and Development (LADOTD) currently follows the 1993 AASHTO pavement design guide's component analysis method in its flexible pavement overlay thickness design. Such an overlay design method, how...
Method of transition from 3D model to its ontological representation in aircraft design process
NASA Astrophysics Data System (ADS)
Govorkov, A. S.; Zhilyaev, A. S.; Fokin, I. V.
2018-05-01
This paper proposes a method for transitioning from a 3D model to its ontological representation and describes its use in the aircraft design process. The problems of design for manufacturability and design automation are also discussed. The introduced method aims to ease data exchange between important aircraft design phases, namely engineering and design control. The method is also intended to increase design speed and 3D model customizability. This requires careful selection of the complex systems (CAD/CAM/CAE/PDM) that provide the basis for integrating design with the technological preparation of production and allow the characteristics of products and their manufacturing processes to be taken into account more fully. Solving this problem is important, as investment in automation defines a company's competitiveness in the years ahead.
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
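The simplest approach the review describes, inflating an individually randomized sample size by a design effect, can be sketched directly. A minimal sketch assuming a two-arm comparison of proportions with the usual normal approximation; the function name and the example inputs are illustrative, not from the paper.

```python
from math import ceil
from statistics import NormalDist

def cluster_trial_sample_size(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions, inflated by the
    design effect DE = 1 + (m - 1) * ICC for randomization by cluster.
    m is the (assumed common) cluster size, icc the intracluster correlation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    # Sample size under individual randomization (normal approximation)
    n_ind = ((z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc           # design effect
    n_clustered = ceil(n_ind * deff)   # individuals per arm after inflation
    clusters = ceil(n_clustered / m)   # clusters per arm
    return n_clustered, clusters
```

For example, detecting 30% vs 20% with clusters of 20 and ICC 0.05 roughly doubles the required individuals relative to individual randomization, since DE = 1 + 19 x 0.05 = 1.95.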
Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design
2017-09-01
This thesis examines the trade space in major design areas such as tonnage, aircraft launch method, propulsion, and performance in order to illustrate... future conflict. Given the resulting designs, it would be worth researching the feasibility of varying the launch method on some of the larger light aircraft carriers, such as the Liaoning.
CometBoards Users Manual Release 1.0
NASA Technical Reports Server (NTRS)
Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo
1996-01-01
Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concepts of the stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems has been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual.
CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.
Engineering Design Education Program for Graduate School
NASA Astrophysics Data System (ADS)
Ohbuchi, Yoshifumi; Iida, Haruhiko
New educational methods for engineering design have been attempted to improve mechanical engineering education for graduate students through collaboration between engineers and designers in education. The education program is based on lectures and practical exercises concerning product design, and covers engineering themes and design-process themes, i.e., project management, QFD, TRIZ, robust design (Taguchi method), ergonomics, usability, marketing, concept generation, etc. In the final exercise, all students were able to design a new product related to their own research theme by applying the learned knowledge and techniques. Through this method of engineering design education, we have confirmed that graduate students are able to experience technological and creative interest.
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
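The additive structure described, with system sensitivity obtained by summing the contributions from each component, can be illustrated with a toy sketch. The component names, design variables, and numeric derivatives below are hypothetical:

```python
def system_sensitivity(components):
    """Assemble system design sensitivities by summing component
    contributions. Each component maps design-variable name ->
    d(performance)/d(variable); the system derivative for a shared
    variable is simply the sum over the components that use it."""
    variables = {v for comp in components for v in comp}
    return {v: sum(comp.get(v, 0.0) for comp in components)
            for v in variables}

# Hypothetical contributions from two components of a built-up structure
plate = {"thickness": -4.2}
stiffener = {"thickness": -1.1, "area": -0.8}
total = system_sensitivity([plate, stiffener])
```

The point mirrored from the abstract is organizational: each component's sensitivity can be computed independently (as a finite element code would, element by element) and the system-level expression falls out by addition.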
Intelligent design of permanent magnet synchronous motor based on CBR
NASA Astrophysics Data System (ADS)
Li, Cong; Fan, Beibei
2018-05-01
Aiming at the many problems in the design process of the permanent magnet synchronous motor (PMSM), such as the complexity of the design process, over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge, a CBR-based design method for the PMSM is proposed. In this paper, a case-based reasoning (CBR) similarity calculation between cases is proposed for retrieving a suitable initial scheme. This method can help designers, by referencing previous design cases, to produce a conceptual PMSM solution quickly. The case-retention process gives the system a self-enriching function that improves its design capability with continued use of the system.
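The CBR retrieval step rests on a case similarity calculation. The abstract does not give its formula, so the sketch below assumes a common weighted nearest-neighbour form over range-normalized attribute distances; the attribute names, ranges, weights, and case library are invented for illustration.

```python
# Hypothetical attribute ranges used to normalize distances to [0, 1]
RANGES = {"rated_power_kW": (0.5, 50.0), "rated_speed_rpm": (500, 6000)}

def case_similarity(query, case, weights):
    """Weighted nearest-neighbour similarity: 1 minus the normalized
    attribute distance, averaged with attribute weights."""
    sim = 0.0
    for attr, w in weights.items():
        lo, hi = RANGES[attr]
        d = abs(query[attr] - case[attr]) / (hi - lo)  # normalized distance
        sim += w * (1.0 - d)
    return sim / sum(weights.values())

weights = {"rated_power_kW": 0.6, "rated_speed_rpm": 0.4}
library = [
    {"rated_power_kW": 5.0, "rated_speed_rpm": 3000},
    {"rated_power_kW": 30.0, "rated_speed_rpm": 1500},
]
query = {"rated_power_kW": 6.0, "rated_speed_rpm": 2800}
best = max(library, key=lambda c: case_similarity(query, c, weights))
```

Retrieval returns the most similar stored motor design as the initial scheme, which the designer then adapts; retaining the adapted case back into the library is what gives the system its self-enriching behaviour.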
New knowledge network evaluation method for design rationale management
NASA Astrophysics Data System (ADS)
Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao
2015-01-01
Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, the design rationale structure scale, association knowledge and reasoning ability, the degree of design justification support, and the degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge networks and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide objective metrics and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.
Structural analysis at aircraft conceptual design stage
NASA Astrophysics Data System (ADS)
Mansouri, Reza
In the past 50 years, computers have augmented human efforts at a tremendous pace. The aircraft industry is no exception. It is more than ever dependent on computing because of a high level of complexity and the increasing need for excellence to survive a highly competitive marketplace. Designers choose computers to perform almost every analysis task. But while doing so, existing effective, accurate, and easy-to-use classical analytical methods are often forgotten, even though they can be very useful, especially in the early phases of aircraft design, where concept generation and evaluation demand physical visibility of design parameters to support decisions [39, 2004]. Structural analysis methods have been used since the very earliest civilizations. Centuries before computers were invented, the pyramids were designed and constructed by the Egyptians around 2000 B.C., the Parthenon was built by the Greeks, and around 240 B.C. Dujiangyan was built by the Chinese. Persepolis, the Hagia Sophia, the Taj Mahal, and the Eiffel Tower are only a few more examples of historical buildings, bridges, and monuments that were constructed before any advances were made in computer-aided engineering. The aircraft industry is no exception either. In the first half of the 20th century, engineers used classical methods to design civil transport aircraft such as the Ford Tri-Motor (1926), Lockheed Vega (1927), Lockheed 9 Orion (1931), Douglas DC-3 (1935), Douglas DC-4/C-54 Skymaster (1938), Boeing 307 (1938), and Boeing 314 Clipper (1939), which became airborne without difficulty. Thus, while advanced numerical methods such as finite element analysis are among the most effective structural analysis methods, classical structural analysis methods can be just as useful, especially during the early phase of a fixed-wing aircraft design, where major decisions are made and concept generation and evaluation demand physical visibility of design parameters to support decisions.
Considering the strengths and limitations of both methodologies, the questions to be answered in this thesis are: How valuable and compatible are the classical analytical methods in today's conceptual design environment? And can these methods complement each other? To answer these questions, this thesis investigates the pros and cons of classical analytical structural analysis methods during the conceptual design stage through the following objectives: illustrate the structural design methodology of these methods within the framework of the Aerospace Vehicle Design (AVD) lab's design lifecycle; demonstrate the effectiveness of the moment distribution method through four case studies. This is done by considering and evaluating the strengths and limitations of these methods. In order to objectively quantify the limitations and capabilities of the analytical method at the conceptual design stage, each case study becomes more complex than the one before.
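The moment distribution method evaluated in those case studies is easy to sketch for the simplest configuration: a two-span continuous beam with a single joint free to rotate. The span lengths, load, and tolerances below are hypothetical, and the sketch hard-codes one balancing joint (B) rather than the general multi-joint procedure.

```python
def moment_distribution(fem, df, carry=0.5, tol=1e-8, max_iter=100):
    """Hardy Cross moment distribution for beam A-B-C with joint B free:
    repeatedly distribute the unbalanced moment at B by the distribution
    factors and carry half of each distributed moment to the far ends."""
    m = dict(fem)  # member-end moments, updated in place
    for _ in range(max_iter):
        unbalanced = m["BA"] + m["BC"]         # out-of-balance moment at B
        if abs(unbalanced) < tol:
            break
        m["BA"] -= df["BA"] * unbalanced       # distribute at joint B
        m["BC"] -= df["BC"] * unbalanced
        m["AB"] -= carry * df["BA"] * unbalanced  # carry-over to fixed ends
        m["CB"] -= carry * df["BC"] * unbalanced
    return m

# Example: two equal 6 m spans, ends A and C fixed, 12 kN/m UDL on AB only.
# Fixed-end moments: M_AB = -wL^2/12 = -36 kN.m, M_BA = +36 kN.m.
fem = {"AB": -36.0, "BA": 36.0, "BC": 0.0, "CB": 0.0}
df = {"BA": 0.5, "BC": 0.5}  # equal stiffness on both sides of joint B
moments = moment_distribution(fem, df)
```

With one free joint the scheme converges after a single balancing cycle; the appeal at the conceptual stage is exactly this hand-calculable transparency, in contrast to a finite element model.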
Torrens, George Edward
2018-01-01
Summative content analysis was used to define methods and heuristics from each case study. The review process was in two parts: (1) a literature review to identify conventional research methods and (2) a summative content analysis of published case studies, based on the identified methods and heuristics, to suggest an order and priority of where and when they were used. Over 200 research and design methods and design heuristics were identified. From the review of the 20 case studies, 42 were identified as being applied. The majority of methods and heuristics were applied in phase two, market choice. There appeared to be a disparity between the limited number of methods frequently used (under 10 within the 20 case studies) and the hundreds available. Implications for Rehabilitation The communication highlights a number of issues that have implications for those involved in assistive technology new product development: •The study defined over 200 well-established research and design methods and design heuristics that are available for use by those who specify and design assistive technology products, which provides a comprehensive reference list for practitioners in the field; •The review within the study suggests only a limited number of research and design methods are regularly used by industrial-design-focused assistive technology new product developers; and •Debate is required among practitioners working in this field to reflect on how a wider range of potentially more effective methods and heuristics may be incorporated into daily working practice.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
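Once the extreme-value distribution of the mission-maximum load has been determined, the design limit load follows by evaluating a chosen percentile of that distribution. A minimal sketch assuming a Gumbel (extreme-value type I) fit; the location, scale, and percentile values are illustrative, not from the report.

```python
from math import log

def gumbel_percentile(mu, beta, p):
    """Inverse CDF of the Gumbel (extreme-value type I) distribution,
    F(x) = exp(-exp(-(x - mu) / beta)):  x_p = mu - beta * ln(-ln(p))."""
    return mu - beta * log(-log(p))

# Hypothetical fit to mission-maximum load data (units, say, kN):
# the design limit load is taken here as the 99th percentile.
design_limit = gumbel_percentile(mu=120.0, beta=8.0, p=0.99)
```

The same percentile evaluation applies whether the distribution's parameters came from time-domain or frequency-domain load simulations; only the fitting step differs.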
Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues
ERIC Educational Resources Information Center
Lieber, Eli
2009-01-01
This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…
NASA Astrophysics Data System (ADS)
Essameldin, Mahmoud; Fleischmann, Friedrich; Henning, Thomas; Lang, Walter
2017-02-01
Freeform optical systems are playing an important role in the field of illumination engineering for redistributing light intensity, because of their capability of achieving accurate and efficient results. The authors presented the basic idea of the freeform lens design method at the 117th annual meeting of the German Society of Applied Optics (DGAO Proceedings). Now, we demonstrate the feasibility of the design method by designing and evaluating a freeform lens. The concepts of luminous intensity mapping, energy conservation, and differential equations are combined in designing a lens for non-imaging applications. The procedures required to design a lens, including the simulations, are explained in detail. The optical performance is investigated using a numerical simulation of optical ray tracing. For evaluation, the results are compared with another recently published design method, showing the accurate performance of the proposed method using a reduced number of mapping angles. As part of the tolerance analyses of the fabrication processes, the influence of light source misalignments (translation and orientation) on the beam-shaping performance is presented. Finally, the importance of considering the extended light source while designing a freeform lens using the proposed method is discussed.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
RobOKoD: microbial strain design for (over)production of target compounds.
Stanford, Natalie J; Millard, Pierre; Swainston, Neil
2015-01-01
Sustainable production of target compounds such as biofuels and high-value chemicals for the pharmaceutical, agrochemical, and chemical industries is becoming an increasing priority given their current dependency upon diminishing petrochemical resources. Designing these strains is difficult, with current methods focusing primarily on knocking out genes and dismissing other vital steps of strain design, including the overexpression and dampening of genes. The design predictions from current methods also do not translate well into successful strains in the laboratory. Here, we introduce RobOKoD (Robust, Overexpression, Knockout and Dampening), a method for predicting strain designs for overproduction of targets. The method uses flux variability analysis to profile each reaction within the system under differing production percentages of target compound and biomass. Using these profiles, reactions are identified as potential knockout, overexpression, or dampening targets. The identified reactions are ranked according to their suitability, providing flexibility in strain design for users. The software was tested by designing a butanol-producing Escherichia coli strain, and was compared against the popular OptKnock and RobustKnock methods. RobOKoD shows favorable design predictions when these are compared to a successful, experimentally validated butanol-producing strain. Overall, RobOKoD provides users with rankings of predicted beneficial genetic interventions with which to support optimized strain design. PMID:25853130
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
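The loop described, aggregating the models' outputs, scoring each model against the aggregate, and feeding the comparison back, might be sketched as follows. The mean ensemble and the absolute-deviation score are assumptions for illustration, not the patented method.

```python
def feedback_round(models, x):
    """One feedback-loop round: run every model on the same input,
    aggregate the predictions, and score each model by its deviation
    from the aggregate. The scores are the comparative information that
    would be fed back to refine the models."""
    preds = [m(x) for m in models]
    aggregate = sum(preds) / len(preds)           # simple mean ensemble
    scores = [abs(p - aggregate) for p in preds]  # deviation from consensus
    return aggregate, scores

# Three toy models of the same event; the middle two disagree symmetrically.
models = [lambda x: 2 * x, lambda x: 2 * x + 1, lambda x: 2 * x - 1]
aggregate, scores = feedback_round(models, 2.0)
```

In a full implementation the scores would drive re-weighting or retraining of the outlying models before the next round, closing the loop the patent describes.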
The synthesis method for design of electron flow sources
NASA Astrophysics Data System (ADS)
Alexahin, Yu I.; Molodozhenzev, A. Yu
1997-01-01
The synthesis method for designing a relativistic, magnetically focused beam source is described in this paper. It makes it possible to find the shape of electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The results show agreement between the synthesis and analysis calculations [1]. This method of electron gun calculation may be applied to immersed electron flows, which are of interest for EBIS electron gun design.
User-Centred Design Using Gamestorming.
Currie, Leanne
2016-01-01
User-centred design (UCD) is becoming a standard in software engineering and has tremendous potential in healthcare. The purpose of this tutorial is to demonstrate, and provide participants with practice in, user-centred design methods that involve 'Gamestorming', a form of brainstorming where 'the rules of life are temporarily suspended'. Participants will learn and apply gamestorming methods, including persona development via empathy mapping, and methods to translate artefacts derived from participatory design sessions into functional and design requirements.
A systematic composite service design modeling method using graph-based theory.
Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh
2015-01-01
Composite service design modeling is an essential process of the service-oriented software development life cycle, in which the candidate services, composite services, operations, and their dependencies must be identified and specified before their design. However, systematic service-oriented design modeling for composite services is still in its infancy, as most existing approaches provide modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling service-oriented design so as to increase reusability and decrease the complexity of the system while keeping service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems as well as enterprise software development, it is applied in the case study of a smart home. The results of the case study not only demonstrate the applicability of ComSDM but can also be used to validate its complexity and reusability. This also guides future research toward design quality measurement, such as using the ComSDM method to measure the quality of composite service designs in service-oriented software systems.
A Systematic Composite Service Design Modeling Method Using Graph-Based Theory
Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh
2015-01-01
Composite service design modeling is an essential process of the service-oriented software development life cycle, in which the candidate services, composite services, operations, and their dependencies must be identified and specified before their design. However, systematic service-oriented design modeling for composite services is still in its infancy, as most existing approaches provide modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling service-oriented design so as to increase reusability and decrease the complexity of the system while keeping service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems as well as enterprise software development, it is applied in the case study of a smart home. The results of the case study not only demonstrate the applicability of ComSDM but can also be used to validate its complexity and reusability. This also guides future research toward design quality measurement, such as using the ComSDM method to measure the quality of composite service designs in service-oriented software systems. PMID:25928358
Learner Centred Design for a Hybrid Interaction Application
ERIC Educational Resources Information Center
Wood, Simon; Romero, Pablo
2010-01-01
Learner centred design methods highlight the importance of involving the stakeholders of the learning process (learners, teachers, educational researchers) at all stages of the design of educational applications and of refining the design through an iterative prototyping process. These methods have been used successfully when designing systems…
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2011-09-01
This paper proposes a new multi-level analysis method, introducing super-element modeling and derived from the multi-level analysis method first proposed by O. F. Hughes, to address the high computational cost of adopting a rationally-based optimal design method in ship structural design. The method was verified by its effective application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship under static and quasi-static loads was used as the analysis object for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. The results reveal that the new method substantially reduces the computational cost of the rationally-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rationally-based optimal design method in ship structural design.
Colquhoun, Heather L; Squires, Janet E; Kolehmainen, Niina; Fraser, Cynthia; Grimshaw, Jeremy M
2017-03-04
Systematic reviews consistently indicate that interventions to change healthcare professional (HCP) behaviour are haphazardly designed and poorly specified. Clarity about methods for designing and specifying interventions is needed. The objective of this review was to identify published methods for designing interventions to change HCP behaviour. A search of MEDLINE, Embase, and PsycINFO was conducted from 1996 to April 2015. Using inclusion/exclusion criteria, a broad screen of abstracts by one rater was followed by a strict screen of full text for all potentially relevant papers by three raters. An inductive approach was first applied to the included studies to identify commonalities and differences between the descriptions of methods across the papers. Based on this process and knowledge of related literatures, we developed a data extraction framework that included, for example, the level of change (e.g. individual versus organisational), the context of development, a brief description of the method, and the tasks included in the method (e.g. barrier identification, component selection, use of theory). 3966 titles and abstracts and 64 full-text papers were screened to yield 15 papers included in the review, each outlining one design method. All of the papers reported methods developed within a specific context. Thirteen papers included barrier identification and 13 included linking barriers to intervention components, although not the same 13 papers. Thirteen papers targeted individual HCPs, with only one paper targeting change across individual, organisational, and system levels. The use of theory and user engagement were each included in 13 of the 15 papers. There is agreement across methods on four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users. The methods also include further tasks.
Examples of methods for designing organisation- and system-level interventions were limited. Further analysis of design tasks could facilitate the development of detailed guidelines for designing interventions.
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method for evaluating a product design in order to make it simpler, easier, and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid product designers in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. Ideally, these three tasks are performed without an interactive process or user intervention, so that product design evaluation can be done automatically. Input to the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
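One step of such a framework, selecting the assembly sequence with the fewest reorientations, can be sketched by exhaustive search. The parts, their assembly directions, and the base-first rule below are hypothetical illustrations, not taken from the paper.

```python
from itertools import permutations

# hypothetical assembly directions, as might be extracted from a 3D model
directions = {"base": "+z", "gear": "+z", "cover": "-z", "screw": "-z"}

def reorientations(sequence):
    """Count direction changes between consecutive assembly operations."""
    return sum(1 for a, b in zip(sequence, sequence[1:])
               if directions[a] != directions[b])

def best_sequence(parts, first="base"):
    """Enumerate sequences starting from the base part and keep the one
    needing the fewest assembly reorientations."""
    rest = [p for p in parts if p != first]
    return min(((first,) + tail for tail in permutations(rest)),
               key=reorientations)

seq = best_sequence(list(directions))
```

Here the search groups the two +z parts before the two -z parts, so only one reorientation is needed; real DFA tools add many further criteria, but the selection principle is the same.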
An Approach to the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.
1997-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
An approach to the constrained design of natural laminar flow airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford Earl
1995-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method
NASA Astrophysics Data System (ADS)
Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui
2018-06-01
In order to reduce resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted meshes and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under constraints, including a displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and a theoretical basis for designing green ships.
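The constrained NLP step can be sketched with a quadratic-penalty method on a one-variable stand-in: minimize a resistance-like function subject to a displacement lower bound. The functions, bound, and step sizes below are illustrative assumptions, not the paper's CFD-based formulation.

```python
def optimize(resistance, displacement, disp_min, x0,
             lr=0.005, steps=2000, mu=100.0):
    """Quadratic-penalty sketch: minimize resistance(x) subject to
    displacement(x) >= disp_min, via numerical gradient descent."""
    def penalized(x):
        violation = max(0.0, disp_min - displacement(x))
        return resistance(x) + mu * violation ** 2

    x, h = x0, 1e-6
    for _ in range(steps):
        grad = (penalized(x + h) - penalized(x - h)) / (2 * h)
        x -= lr * grad  # descend on the penalized objective
    return x

# toy stand-ins: resistance grows quadratically with the design variable,
# displacement grows linearly and must stay above 1.2
x_opt = optimize(lambda x: x * x, lambda x: 1.0 + x, disp_min=1.2, x0=1.0)
```

A finite penalty weight leaves the solution slightly inside the constraint (here x_opt settles near 0.198 rather than exactly 0.2), which is the usual trade-off of penalty formulations against exact constrained solvers.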
Aerodynamic shape optimization using control theory
NASA Technical Reports Server (NTRS)
Reuther, James
1996-01-01
Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body-fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method, illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.
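The efficiency argument for the adjoint approach can be demonstrated on a tiny linear "flow" model: with J(x) = ½|u(x) − u_target|² and a state equation A u = x, one adjoint solve yields the entire gradient, matching finite differences component by component. The 2×2 system and target state are purely illustrative.

```python
A = [[2.0, 1.0], [0.5, 3.0]]   # stand-in for a discretized flow operator
target = [1.0, 2.0]            # stand-in target state

def solve2(M, b):
    """Direct solve of a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def objective(x):
    u = solve2(A, x)           # "flow solution" for design variables x
    return 0.5 * sum((ui - ti) ** 2 for ui, ti in zip(u, target))

def adjoint_gradient(x):
    u = solve2(A, x)
    residual = [ui - ti for ui, ti in zip(u, target)]   # dJ/du
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]       # transpose of A
    lam = solve2(At, residual)                          # adjoint solve
    return lam   # dJ/dx = lambda, since the state equation is A u = x
```

One adjoint solve produces all gradient components at once, whereas finite differencing would need one extra "flow solve" per design variable; with hundreds of shape variables this is the decisive saving.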
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
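The reader-workload saving the split-plot design offers is simple arithmetic. The study sizes below are hypothetical, chosen only to show the effect of partitioning readers and cases into groups.

```python
def readings_per_reader(n_cases, n_modalities, n_groups=1):
    """Interpretations each reader performs when readers and cases are
    partitioned into n_groups blocks; n_groups=1 is the fully crossed
    (factorial) design."""
    assert n_cases % n_groups == 0, "cases must split evenly into groups"
    return (n_cases // n_groups) * n_modalities

factorial = readings_per_reader(200, 2)               # reads everything
split_plot = readings_per_reader(200, 2, n_groups=4)  # reads one block
```

In this hypothetical study each reader's workload drops from 400 to 100 interpretations; the statistical question the paper addresses is how much efficiency that partitioning costs.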
Calibration of resistance factors for drilled shafts for the new FHWA design method.
DOT National Transportation Integrated Search
2013-01-01
The Load and Resistance Factor Design (LRFD) calibration of deep foundations in Louisiana was first completed for driven piles (LTRC Final Report 449) in May 2009 and then for drilled shafts using the 1999 FHWA design method (O'Neill and Reese method) (...
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
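The polynomial-surrogate idea can be sketched in one dimension: fit a quadratic response surface to noisy evaluations by least squares, then read the optimum off the surrogate. The drag-like response and its noise term below are invented for illustration.

```python
import math

def gauss_solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][c] * x[c]
                              for c in range(i + 1, n))) / A[i][i]
    return x

def quadratic_surrogate(samples):
    """Least-squares fit y ~ a x^2 + b x + c via the normal equations;
    returns (a, b, c) and the surrogate's predicted optimum -b/(2a)."""
    basis = lambda x: (x * x, x, 1.0)
    XtX = [[sum(basis(x)[i] * basis(x)[j] for x, _ in samples)
            for j in range(3)] for i in range(3)]
    Xty = [sum(basis(x)[i] * y for x, y in samples) for i in range(3)]
    a, b, c = gauss_solve(XtX, Xty)
    return (a, b, c), -b / (2 * a)

# noisy evaluations of a hypothetical drag-like response, minimum at x = 2
samples = [(0.5 * k, (0.5 * k - 2.0) ** 2 + 0.05 * math.sin(7 * 0.5 * k))
           for k in range(9)]
coeffs, x_min = quadratic_surrogate(samples)
```

The least-squares fit averages out the high-frequency "noise" and recovers the underlying minimum near x = 2, which is the noise-filtering property the abstract attributes to polynomial response surfaces.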
Participatory design of healthcare technology with children.
Sims, Tara
2018-02-12
Purpose: There are many frameworks and methods for involving children in design research, and Human-Computer Interaction provides rich methods for involving children when designing technologies; this paper aims to discuss these issues. Design/methodology/approach: This paper examines various approaches to involving children in design, considering whether each approach treats children as study objects or as active participants. Findings: The BRIDGE method is a sociocultural approach to product design that views children as active participants, enabling them to contribute to the design process as competent and resourceful partners. An example is provided in which BRIDGE was successfully applied to developing upper-limb prostheses with children. Originality/value: Approaching design in this way can provide children with opportunities to develop social, academic and design skills and to develop autonomy.
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state of the art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software work flow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
Problem Solving Techniques for the Design of Algorithms.
ERIC Educational Resources Information Center
Kant, Elaine; Newell, Allen
1984-01-01
Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.
Feng, Shuo
2014-01-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method using Fourier-domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses with an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420
Feng, Shuo; Ji, Jim
2014-04-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method using Fourier-domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses with an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns.
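The conjugate-gradient solver at the heart of such pulse-design methods can be sketched on a small real-valued system (actual pTx design works with complex, Fourier-gridded system matrices; the 2×2 matrix here is purely illustrative). CG solves A x = b for symmetric positive-definite A using only matrix-vector products.

```python
def conjugate_gradient(matvec, b, iters=50, tol=1e-12):
    """Solve A x = b for symmetric positive-definite A, given only the
    matrix-vector product matvec(p) = A p."""
    x = [0.0] * len(b)
    r = list(b)                      # residual b - A x (x starts at 0)
    p = list(r)                      # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# illustrative SPD system standing in for the gridded design equations
x = conjugate_gradient(lambda p: [4 * p[0] + p[1], p[0] + 3 * p[1]],
                       [1.0, 2.0])
```

Because only matvec is needed, the same loop applies when A p is evaluated via FFT-based gridding, which is where the claimed speedup over a dense conjugate-gradient formulation comes from.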
A new method named as Segment-Compound method of baffle design
NASA Astrophysics Data System (ADS)
Qin, Xing; Yang, Xiaoxu; Gao, Xin; Liu, Xishuang
2017-02-01
As observation demands have increased, so have the requirements on lens imaging quality. A Segment-Compound baffle design method is proposed in this paper. Three traditional methods of baffle design are characterized as Inside to Outside, Outside to Inside, and Mirror Symmetry. Using a transmission-type optical system, the four methods were each applied to design a stray light suppression structure for it. The structures were then modeled and simulated with SolidWorks, CAXA, and TracePro, and point source transmittance (PST) curves were obtained to describe their performance. The results show that the Segment-Compound method suppresses stray light more effectively. Moreover, it is easy to achieve and requires no special materials.
Constrained Aerothermodynamic Design of Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Gally, Tom; Campbell, Dick
2002-01-01
An investigation was conducted into possible methods of incorporating a hypersonic design capability with aerothermodynamic constraints into the CDISC aerodynamic design tool. The work was divided into two distinct phases: develop relations between surface curvature and hypersonic pressure coefficient which are compatible with CDISC's direct-iterative design method; and explore and implement possible methods of constraining the heat transfer rate over all or portions of the design surface. The main problem in implementing this method has been the weak relationship between surface shape and pressure coefficient at the stagnation point and the need to design around the surface blunt leading edge where there is a slope singularity. The final results show that some success has been achieved, but further improvements are needed.
Computational predictive methods for fracture and fatigue
NASA Technical Reports Server (NTRS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-01-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
Computational predictive methods for fracture and fatigue
NASA Astrophysics Data System (ADS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-09-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
Field Guide for Designing Human Interaction with Intelligent Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Thronesbery, Carroll G.
1998-01-01
The characteristics of this Field Guide approach address the problems of designing innovative software to support user tasks. The requirements for novel software are difficult to specify a priori, because there is not sufficient understanding of how the users' tasks should be supported, and there are no obvious pre-existing design solutions. When the design team is in unfamiliar territory, care must be taken to avoid rushing into detailed design, requirements specification, or implementation of the wrong product. The challenge is to get the right design and requirements in an efficient, cost-effective manner. This document's purpose is to describe the methods we are using to design human interactions with intelligent systems which support Space Shuttle flight controllers in the Mission Control Center at NASA/Johnson Space Center. Although these software systems usually have some intelligent features, the design challenges arise primarily from the innovation needed in the software design. While these methods are tailored to our specific context, they should be extensible, and helpful to designers of human interaction with other types of automated systems. We review the unique features of this context so that you can determine how to apply these methods to your project. Throughout this Field Guide, goals of the design methods are discussed. This should help designers understand how a specific method might need to be adapted to the project at hand.
A space radiation transport method development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2004-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation; it allows field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice, enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the design optimized by the deterministic method. Published by Elsevier Ltd on behalf of COSPAR.
Sanchez-Lite, Alberto; Garcia, Manuel; Domingo, Rosario; Angel Sebastian, Miguel
2013-01-01
Musculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment improves the design process by identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the different phases of the product life cycle. This paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool that is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment allow the functional performance of the parts to be addressed. Such tools can also provide an ergonomic workstation design, together with a competitive advantage in the assembly process. The method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences between the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.
Research on the Bionics Design of Automobile Styling Based on the Form Gene
NASA Astrophysics Data System (ADS)
Aili, Zhao; Long, Jiang
2017-09-01
From the perspective of form-gene heritage, this thesis analyzes the genetic make-up, cultural inheritance, and aesthetic features in the evolution and development of brand automobiles' forms, and proposes a bionic design concept and methods for automobile styling design. This innovative method must be based on the form gene, and the consistency and combination of form elements must be maintained during the design. Taking the design of Maserati as an example, the thesis demonstrates the design method and philosophy in terms of form-gene expression and bionic design innovation for future automobile styling.
Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method
NASA Technical Reports Server (NTRS)
Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.
2003-01-01
A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.
Active controls: A look at analytical methods and associated tools
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.
1984-01-01
A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.
Category's analysis and operational project capacity method of transformation in design
NASA Astrophysics Data System (ADS)
Obednina, S. V.; Bystrova, T. Y.
2015-10-01
The method of transformation is attracting widespread interest in fields such as contemporary design. However, in design theory little attention has been paid to the categorical status of the term "transformation". This paper presents a conceptual analysis of transformation based on the theory of form employed in the influential essays of Aristotle and Thomas Aquinas. In the present work, transformation as a method of shaping in design is explored, and the potential application of the term in design is demonstrated.
The role of the optimization process in illumination design
NASA Astrophysics Data System (ADS)
Gauvin, Michael A.; Jacobsen, David; Byrne, David J.
2015-07-01
This paper examines the role of the optimization process in illumination design. We discuss why the starting point of the optimization process is crucial to a better design, and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute-force method and the Downhill Simplex method are used to demonstrate optimization methods, with a focus on using interactive design tools to create better starting points to streamline the optimization process.
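The importance of the starting point can be illustrated with a Downhill Simplex (Nelder-Mead) run on a toy problem. The two-parameter merit function, its ripple term, and the starting point below are invented for illustration, not taken from the paper; SciPy's Nelder-Mead implementation stands in for the optimizer in an illumination design tool:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical merit function: squared deviation of two design parameters
# from their ideal values, plus a ripple term that creates local minima
# capable of trapping a poorly chosen starting point.
def merit(p):
    x, y = p
    return (x - 2.0) ** 2 + (y + 1.0) ** 2 + 0.3 * np.sin(5 * x) ** 2

# A starting point chosen interactively (e.g. from a quick parameter sweep)
# lands in the basin of a good minimum; a blind start may not.
good_start = np.array([1.8, -0.8])
result = minimize(merit, good_start, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x, result.fun)
```

With this start the simplex settles near (1.9, -1.0) with a small merit value; a start near one of the ripple's shallow basins would stall at a much worse design, which is the behavior interactive pre-exploration is meant to avoid.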
Design and fabrication of planar structures with graded electromagnetic properties
NASA Astrophysics Data System (ADS)
Good, Brandon Lowell
Successfully integrating electromagnetic properties in planar structures offers numerous benefits to the microwave and optical communities. This work aims to formulate new analytic and optimized design methods, create new fabrication techniques for achieving those methods, and match appropriate implementations of the methods to fabrication techniques. The analytic method consists of modifying an approach that realizes perfect antireflective properties from graded profiles. This method is shown for all-dielectric and magneto-dielectric grading profiles. The optimized design methods are applied to transformer (discrete) or taper (continuous) designs. From these methods, a subtractive and an additive manufacturing technique were established and are described. The additive method, dry powder dot deposition, enables three-dimensionally varying electromagnetic properties in a structural composite. The combination of methods and fabrication is shown in two applied methodologies. The first uses dry powder dot deposition to create one-dimensionally graded electromagnetic profiles in a planar fiberglass composite. The second simultaneously applies antireflective properties and adjusts directivity through a slab by means of subwavelength structures to achieve a flat antireflective lens. The end result of this work is a complete set of methods, formulations, and fabrication techniques to achieve integrated electromagnetic properties in planar structures.
A Method for the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.
1996-01-01
A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers and airfoil thicknesses. The thrusts of the method are its ability to calculate a target N-Factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-Factor relationship that is used to reduce the N-Factors in order to prolong transition; and its ability to design airfoils to meet lift, pitching moment, thickness and leading-edge radius constraints while also being able to meet the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days using a Silicon Graphics IRIS workstation.
A comparison of methods for DPLL loop filter design
NASA Technical Reports Server (NTRS)
Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.
1986-01-01
Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes in discrete time a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, and covers stability, steady-state performance, and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
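The first methodology, mapping an optimum analog filter into the digital domain, is commonly realized with the bilinear transform. A minimal sketch, assuming a standard second-order-loop filter F(s) = (1 + tau2*s)/(tau1*s) and an illustrative update rate (the time constants and rate are not from the article), uses `scipy.signal.bilinear`:

```python
import numpy as np
from scipy.signal import bilinear

# Second-order-loop analog filter F(s) = (1 + tau2*s) / (tau1*s):
# an integrator plus a proportional path, a standard PLL loop filter.
tau1, tau2 = 1.0e-3, 5.0e-5      # illustrative time constants
b_analog = [tau2, 1.0]           # numerator:   tau2*s + 1
a_analog = [tau1, 0.0]           # denominator: tau1*s

fs = 2000.0                      # assumed loop update rate, Hz
b_dig, a_dig = bilinear(b_analog, a_analog, fs)

# The analog integrator pole at s = 0 must map to a digital pole at z = 1,
# preserving the loop's type-2 behavior.
poles = np.roots(a_dig)
print(b_dig, a_dig, poles)
```

The article's point about update rate shows up here directly: as fs grows relative to the loop bandwidth, the digital coefficients approach the impulse-invariant behavior of the analog filter, so the choice of mapping matters less.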
NASA Technical Reports Server (NTRS)
Yang, Y. L.; Tan, C. S.; Hawthorne, W. R.
1992-01-01
A computational method, based on a theory for turbomachinery blading design in three-dimensional inviscid flow, is applied to a parametric design study of a radial inflow turbine wheel. As the method requires the specification of swirl distribution, a technique for its smooth generation within the blade region is proposed. Excellent agreements have been obtained between the computed results from this design method and those from direct Euler computations, demonstrating the correspondence and consistency between the two. The computed results indicate the sensitivity of the pressure distribution to a lean in the stacking axis and a minor alteration in the hub/shroud profiles. Analysis based on Navier-Stokes solver shows no breakdown of flow within the designed blade passage and agreement with that from design calculation; thus the flow in the designed turbine rotor closely approximates that of an inviscid one. These calculations illustrate the use of a design method coupled to an analysis tool for establishing guidelines and criteria for designing turbomachinery blading.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...
Guidelines for the design of subsurface drainage systems for highway structural sections
DOT National Transportation Integrated Search
1972-06-01
Design criteria and a design method for pavement subsurface drainage systems include inflow-outflow method of analysis, open graded drainage layers, collector drains, pipe outlets and markers. Design examples are given for embankment sections, cut se...
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
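The adjoint variable method mentioned above can be sketched for a discrete system K(b)u = f with performance psi = c^T u: solving the adjoint system K^T lambda = c gives dpsi/db = -lambda^T (dK/db) u when f is independent of b. The 2x2 "stiffness" below is a toy invented for illustration, and the result is checked against central finite differences:

```python
import numpy as np

# Discrete model: K(b) u = f, performance psi = c^T u, sizing variable b.
def K(b):
    # toy 2x2 stiffness whose entries scale with the sizing variable b
    return np.array([[2.0 * b, -b],
                     [-b, 3.0 * b]])

def dK_db(b):
    return np.array([[2.0, -1.0],
                     [-1.0, 3.0]])

f = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])
b0 = 1.5

u = np.linalg.solve(K(b0), f)          # state solution
lam = np.linalg.solve(K(b0).T, c)      # adjoint solution
dpsi_adjoint = -lam @ dK_db(b0) @ u    # f is independent of b here

# Central finite-difference check of the sensitivity
h = 1e-6
psi = lambda b: c @ np.linalg.solve(K(b), f)
dpsi_fd = (psi(b0 + h) - psi(b0 - h)) / (2 * h)
print(dpsi_adjoint, dpsi_fd)
```

The appeal for built-up structures is that one adjoint solve yields the sensitivity with respect to every design variable, instead of one reanalysis per variable as with finite differencing.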
ERIC Educational Resources Information Center
Sahrir, Muhammad Sabri; Alias, Nor Aziah; Ismail, Zawawi; Osman, Nurulhuda
2012-01-01
The design and development research, first proposed by Brown and Collins in the 1990s, is currently among the well-known methods in educational research to test theory and validate its practicality. The method is also known as developmental research, design research, design-based research, formative research and design-cased and possesses…
Quantifying electrical impacts on redundant wire insertion in 7nm unidirectional designs
NASA Astrophysics Data System (ADS)
Mohyeldin, Ahmed; Schroeder, Uwe Paul; Srinivasan, Ramya; Narisetty, Haritez; Malik, Shobhit; Madhavan, Sriram
2017-04-01
In nanometer-scale integrated circuits, via failures due to random defects are a well-known yield detractor, and via redundancy insertion is a common method to help enhance semiconductor yield. For the case of Self-Aligned Double Patterning (SADP), which might require unidirectional design layers as in some advanced technology nodes, the conventional methods of inserting redundant vias no longer work. This is because adding redundant vias conventionally requires adding metal shapes in the non-preferred direction, which would violate the SADP design constraints. Therefore, metal layers fabricated using unidirectional SADP require an alternative method for providing the needed redundancy. This paper proposes a post-layout Design for Manufacturability (DFM) redundancy insertion method tailored to the design requirements introduced by unidirectional metal layers. The proposed method adds redundant wires in the preferred direction, after searching for nearby vacant routing tracks, in order to provide redundant paths for electrical signals. This method opportunistically adds robustness against failures due to silicon defects without impacting area or incurring new design rule violations. Implementation details of this redundancy insertion method are explained in this paper. One known challenge with similar DFM layout fixing methods is the possible introduction of undesired electrical impact, causing other unintentional failures in design functionality. In this paper, a study is presented to quantify the electrical impact of such a redundancy insertion scheme and to examine whether that impact can be tolerated. The paper shows results that evaluate DFM insertion rates and the corresponding electrical impact for a given design utilization and maximum inserted wire length. Parasitic extraction and static timing analysis results are presented.
A typical digital design implemented using GLOBALFOUNDRIES 7nm technology is used for demonstration. The provided results can help evaluate such extensive DFM insertion method from an electrical standpoint. Furthermore, the results could provide guidance on how to implement the proposed method of adding electrical redundancy such that intolerable electrical impacts could be avoided.
Eckermann, Simon; Karnon, Jon; Willan, Andrew R
2010-01-01
Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can vary from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large and further research not worthwhile or small and further research worthwhile. In contrast, each of questions 1-4 are shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize expected value of sample information (EVSI) minus expected costs across designs. In comparing complexity in use of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. 
More complex VOI methods such as bootstrapping of the expected value of partial EVPI may have potential value in refining overall research design. However, Occam's razor must be seriously considered in application of these VOI methods, given their increased complexity and current limitations in informing decision making, with restriction to EVPI rather than EVSI and not allowing for important decision-making contexts. Initial use of CLT methods to focus these more complex partial VOI methods towards where they may be useful in refining optimal overall trial design is suggested. Integrating CLT methods with such partial VOI methods to allow estimation of partial EVSI is suggested in future research to add value to the current VOI toolkit.
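The core EVPI quantity discussed above, the expected value under perfect information minus the expected value under current information, can be sketched with a simple Monte Carlo simulation. The two-option net-benefit distributions below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (normalized) incremental net benefit of a new treatment
# versus standard care, uncertain under current information.
n = 200_000
nb_standard = rng.normal(0.0, 1.0, n)
nb_new = rng.normal(0.1, 1.5, n)      # higher mean, but more uncertain

# Decide now: pick the option with the best *expected* net benefit.
value_current_info = max(nb_standard.mean(), nb_new.mean())

# Perfect information: pick the best option in every simulated world.
value_perfect_info = np.maximum(nb_standard, nb_new).mean()

evpi = value_perfect_info - value_current_info
print(evpi)
```

This illustrates the authors' caution: EVPI here is strictly positive whenever there is any chance the current best choice is wrong, so a large EVPI by itself says nothing about whether a *particular* trial design is worth its cost; that requires comparing EVSI against the cost of each design.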
Sui, Sai; Ma, Hua; Lv, Yueguang; Wang, Jiafu; Li, Zhiqiang; Zhang, Jieqiu; Xu, Zhuo; Qu, Shaobo
2018-01-22
Arbitrary control of electromagnetic waves remains a significant challenge, although it promises many important applications. Here, we propose a fast optimization method for designing a wideband metasurface without using the Pancharatnam-Berry (PB) phase, whose elements are non-absorptive and provide a predictable, smooth wideband phase shift. In our design method, the metasurface is composed of low-Q-factor resonant elements without the PB phase and is optimized by a genetic algorithm combined with a nonlinear fitting method, with the advantage that the far-field scattering patterns can be quickly synthesized from the hybrid array patterns. To validate the design method, a wideband low radar cross section (RCS) metasurface is demonstrated, showing good feasibility and wideband RCS reduction performance. This work reveals opportunities for metasurfaces in the effective manipulation of microwaves and offers a flexible, fast optimal design method.
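The genetic algorithm component of such an optimization can be sketched generically. The toy objective below (matching a hypothetical element phase profile) stands in for the paper's far-field pattern synthesis; the population size, operators, and parameters are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in objective: match a desired (hypothetical) element phase
# profile instead of evaluating a full far-field RCS pattern.
target = np.linspace(0.0, np.pi, 8)

def fitness(ind):
    return -np.mean((ind - target) ** 2)   # higher is better

pop_size, n_genes, n_gen = 40, 8, 150
pop = rng.uniform(0.0, np.pi, (pop_size, n_genes))
scores = np.array([fitness(ind) for ind in pop])
initial_best = scores.max()

for _ in range(n_gen):
    # Tournament selection: each slot takes the better of two random picks.
    i, j = rng.integers(0, pop_size, (2, pop_size))
    winners = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
    # Uniform crossover with a shifted copy of the winners.
    mask = rng.random((pop_size, n_genes)) < 0.5
    children = np.where(mask, winners, np.roll(winners, 1, axis=0))
    # Sparse Gaussian mutation.
    mut = rng.random(children.shape) < 0.2
    children = children + mut * rng.normal(0.0, 0.05, children.shape)
    # Elitism: carry over the best individual of the old population.
    children[0] = pop[scores.argmax()]
    pop = children
    scores = np.array([fitness(ind) for ind in pop])

best_score = scores.max()
print(initial_best, best_score)
```

Elitism guarantees the best design never regresses between generations, which is why GA-based metasurface optimizers can be run with aggressive mutation for exploration without losing the current best layout.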
NASA Technical Reports Server (NTRS)
English, Robert E; Cavicchi, Richard H
1951-01-01
Empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performances of nine turbine designs. Measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson
2002-01-01
This report consists of a survey of the state of the art in uncertainty-based design together with recommendations for a Base research activity in this area for the NASA Langley Research Center. This report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified. and the benefits of the use of such methods are explained. Particular research needs are listed.
A Framework to Determine New System Requirements Under Design Parameter and Demand Uncertainties
2015-04-30
relegates quantitative complexities of decision-making to the method and designates trade-space exploration to the practitioner. We demonstrate the approach... play a critical role in determining new system requirements. Scope and Method of Approach: The early stages of the design process have substantial...
Integrated Research/Education University Aircraft Design Program Development
2017-04-06
iterations and loop shaping compared to MIMO control methods. Despite the drawbacks, loop closure and classical methods are the design methods most commonly... (AFRL-AFOSR-VA-TR-2017-0077; Eli Livne, University of Washington; grant FA9550-14-1-0027)
Material selection and assembly method of battery pack for compact electric vehicle
NASA Astrophysics Data System (ADS)
Lewchalermwong, N.; Masomtob, M.; Lailuck, V.; Charoenphonphanich, C.
2018-01-01
Battery packs have become the key component in electric vehicles (EVs). Their main costs are the battery cells and the assembly processes. The battery cell price is set by battery manufacturers, while the assembly cost depends on the battery pack design. Battery pack designers want the overall cost to be as low as possible while still achieving high performance and safety. Material selection and assembly method, as well as component design, are therefore critical in determining the cost-effectiveness of battery modules and battery packs. This work presents a Decision Matrix that can aid the decision-making process for component materials and assembly methods in battery module and battery pack design. The aim of this study is to take advantage of incorporating an architecture analysis method into decision matrix methods by capturing best practices for conducting design architecture analysis, taking full account of the key design components critical to efficient and effective development of the designs. The methodology also considers the impacts of the choice alternatives along multiple dimensions. Various alternatives for battery pack materials and assembly techniques are evaluated, and some sample costs are presented. Because the battery pack contains many components, only seven components, including the positive busbar and the Z busbar, are presented in this paper to illustrate the decision matrix method.
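A weighted-sum decision matrix of the kind described can be sketched as follows. The criteria, weights, scores, and candidate joining methods below are hypothetical examples for illustration, not values from the paper:

```python
import numpy as np

# Hypothetical decision matrix for a busbar joining method: each
# alternative is scored 1-5 against weighted criteria
# (cost, conductivity, strength, serviceability); weights sum to 1.
criteria_weights = np.array([0.35, 0.30, 0.20, 0.15])
alternatives = {
    "laser welding":      np.array([2, 5, 5, 2]),
    "ultrasonic welding": np.array([3, 4, 4, 2]),
    "bolted joint":       np.array([4, 3, 3, 5]),
}

# Weighted score = dot product of the score vector with the weights.
scores = {name: float(vals @ criteria_weights)
          for name, vals in alternatives.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

In practice, the value of the matrix lies less in the final number than in forcing the weights and scores to be stated explicitly, so that a design review can challenge them one at a time.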
A Comparison of the Effectiveness of Two Design Methodologies in a Secondary School Setting.
ERIC Educational Resources Information Center
Cannizzaro, Brenton; Boughton, Doug
1998-01-01
Examines the effectiveness of the analysis-synthesis and generator-conjecture-analysis models of design education. Concludes that the generator-conjecture-analysis design method produced student design products of a slightly higher standard than the analysis-synthesis design method. Discusses the findings in more detail and considers implications.…
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Topology optimization based design of unilateral NMR for generating a remote homogeneous field.
Wang, Qi; Gao, Renjing; Liu, Shutian
2017-06-01
This paper presents a topology optimization based design method for unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is actualized by seeking out the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived, and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about two times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.
Dai, Sheng-Yun; Xu, Bing; Shi, Xin-Yuan; Xu, Xiang; Sun, Ying-Qiang; Qiao, Yan-Jiang
2017-03-01
This study aims to propose a continual improvement strategy based on quality by design (QbD). As an example, an ultra high performance liquid chromatography (UPLC) method was developed to accomplish the method transfer from HPLC to UPLC for Panax notoginseng saponins (PNS) and achieve continual improvement of PNS analysis based on QbD. Plackett-Burman screening design and Box-Behnken optimization design were employed to further understand the relationship between the critical method parameters (CMPs) and critical method attributes (CMAs), and a Bayesian design space was then built. The separation degree of the critical peaks (ginsenoside Rg₁ and ginsenoside Re) was over 2.0 and the analysis time was less than 17 min for a method chosen from the design space, with 20% initial acetonitrile concentration, 10 min isocratic time, and a gradient slope of 6%•min⁻¹. Finally, the optimum method was validated by accuracy profile. Based on the same analytical target profile (ATP), a comparison of HPLC and UPLC, covering the chromatographic method, CMA identification, the CMP-CMA model, and the system suitability test (SST), indicated that the UPLC method shortens the analysis time, improves the critical separation, and satisfies the SST requirements. In all, the HPLC method can be replaced by UPLC for the quantitative analysis of PNS. Copyright© by the Chinese Pharmaceutical Association.
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
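The First-Order Reliability Method (FORM) at the heart of the proposed approach can be sketched with the standard Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration on a toy limit-state function in standard normal space. The limit state and its coefficients below are invented for illustration and are not the paper's hydrological model:

```python
import numpy as np
from scipy.stats import norm

# Toy limit state in standard normal space: g(u) <= 0 means flooding.
# u[0] ~ standardized rainfall intensity, u[1] ~ a second standardized
# parameter (e.g. antecedent catchment wetness); both are assumptions.
def g(u):
    return 4.0 - u[0] - 0.8 * u[1] - 0.1 * u[0] * u[1]

def grad_g(u):
    return np.array([-1.0 - 0.1 * u[1], -0.8 - 0.1 * u[0]])

# HL-RF iteration toward the most probable failure point u*.
u = np.zeros(2)
for _ in range(50):
    grd = grad_g(u)
    u = (grd @ u - g(u)) / (grd @ grd) * grd

beta = np.linalg.norm(u)     # reliability index
p_f = norm.cdf(-beta)        # first-order failure probability
print(beta, p_f)
```

The converged point u* is the "representative design storm" in probabilistic terms: the most likely combination of rainfall intensity and the other parameter that produces the flood, which is exactly what the proposed design charts tabulate per return period.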
ATLAS, an integrated structural analysis and design system. Volume 6: Design module theory
NASA Technical Reports Server (NTRS)
Backman, B. F.
1979-01-01
The automated design theory underlying the operation of the ATLAS Design Module is described. The methods, applications, and limitations associated with the fully stressed design, the thermal fully stressed design, and a regional optimization algorithm are presented. A discussion of the convergence characteristics of the fully stressed design is also included. Derivations and concepts specific to the ATLAS design theory are shown, while conventional terminology and established methods are identified by references.
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Nystrom, G. A.; Bardina, J.; Lombard, C. K.
1987-01-01
This paper describes the application of the conservative supra characteristic method (CSCM) to predict the flow around two-dimensional slot injection cooled cavities in hypersonic flow. Seven different numerical solutions are presented that model three different experimental designs. The calculations model outer flow conditions including the effects of nozzle/lip geometry, angle of attack, nozzle inlet conditions, boundary and shear layer growth, and turbulence on the surrounding flow. The calculations were performed for analysis prior to wind tunnel testing and for sensitivity studies early in the design process. Qualitative and quantitative understanding of the flows for each of the cavity designs and design recommendations are provided. The present paper demonstrates the ability of numerical schemes, such as the CSCM method, to play a significant role in the design process.
Automated design of genetic toggle switches with predetermined bistability.
Chen, Shuobing; Zhang, Haoqian; Shi, Handuo; Ji, Weiyue; Feng, Jingchen; Gong, Yan; Yang, Zhenglin; Ouyang, Qi
2012-07-20
Synthetic biology aims to rationally construct biological devices with required functionalities. Methods that automate the design of genetic devices without post-hoc adjustment are therefore highly desired. Here we provide a method to predictably design genetic toggle switches with predetermined bistability. To accomplish this task, a biophysical model that links ribosome binding site (RBS) DNA sequence to toggle switch bistability was first developed by integrating a stochastic model with an RBS design method. Then, to parametrize the model, a library of genetic toggle switch mutants was experimentally built, followed by establishing the equivalence between RBS DNA sequences and switch bistability. To test this equivalence, RBS nucleotide sequences for different specified bistabilities were designed in silico and experimentally verified. The results show that the deciphered equivalence is highly predictive for toggle switch design with predetermined bistability. This method can be generalized to the quantitative design of other probabilistic genetic devices in synthetic biology.
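The paper's model is stochastic and sequence-based; as generic background only, the deterministic Gardner-Collins toggle switch equations illustrate the bistability being designed for. The parameters and initial conditions below are illustrative, not taken from the study:

```python
def toggle_steady_state(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=20000):
    """Euler-integrate the classical two-repressor toggle switch ODEs:
    du/dt = alpha/(1+v^n) - u,  dv/dt = alpha/(1+u^n) - v."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**n) - u
        dv = alpha / (1.0 + u**n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# two different initial conditions settle into two distinct stable states
state_a = toggle_steady_state(10.0, 0.0)  # repressor u dominant
state_b = toggle_steady_state(0.0, 10.0)  # repressor v dominant
```

The two trajectories converge to opposite high/low expression states, which is the bistable behavior that the RBS-sequence tuning in the abstract controls quantitatively.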
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs and resemble well the results from previous 2D or density-based studies.
Simulation methods to estimate design power: an overview for applied research
2011-01-01
Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
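The simulation approach described above can be sketched minimally: simulate many trials under an assumed effect size, test each, and report the rejection fraction. The effect size, sample size, and large-sample normal-approximation test below are illustrative choices, not the article's examples:

```python
import numpy as np

def simulated_power(effect=0.6, n_per_arm=50, n_sims=2000, seed=1):
    """Estimate power of a two-arm randomized design by simulation:
    fraction of simulated trials in which H0 (no effect) is rejected."""
    rng = np.random.default_rng(seed)
    crit = 1.96  # two-sided alpha = 0.05, normal approximation
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        se = np.sqrt(control.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        z = (treated.mean() - control.mean()) / se
        rejections += abs(z) > crit
    return rejections / n_sims

power = simulated_power()
```

For these assumed inputs the conventional power formula gives roughly 0.85, and the simulation reproduces that; the same loop extends to cluster-randomized or other complex designs simply by changing the data-generating step.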
TLNS3D/CDISC Multipoint Design of the TCA Concept
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Mann, Michael J.
1999-01-01
This paper presents the work done to date by the authors on developing an efficient approach to multipoint design and applying it to the design of the HSR TCA (High Speed Research Technology Concept Aircraft) configuration. While the title indicates that this exploratory study has been performed using the TLNS3DMB flow solver and the CDISC (Constrained Direct Iterative Surface Curvature) design method, the CDISC method could have been used with any flow solver, and the multipoint design approach does not require the use of CDISC. The goal of the study was to develop a multipoint design method that could achieve a design in about the same time as 10 analysis runs.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases.
Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
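A toy version of the surrogate-based search described above, assuming a one-dimensional design space and a quadratic surrogate (both invented for illustration; real aerodynamic surrogates are higher-dimensional and the objective is an expensive CFD run):

```python
import numpy as np

def surrogate_minimize(f, x_samples, n_iters=5):
    """Fit a quadratic surrogate to sampled points, jump to its analytic
    minimizer, evaluate the true function there, and refit -- a minimal
    database-driven optimization loop."""
    xs = list(x_samples)
    ys = [f(x) for x in xs]
    for _ in range(n_iters):
        a, b, c = np.polyfit(xs, ys, 2)   # surrogate: a*x^2 + b*x + c
        x_new = -b / (2.0 * a)            # minimizer of the surrogate
        xs.append(x_new)                  # "high-fidelity" evaluation
        ys.append(f(x_new))               # enriches the database
    return xs[-1]

# illustrative objective with a known minimum at x = 2
x_opt = surrogate_minimize(lambda x: (x - 2.0)**2, [0.0, 1.0, 3.0, 4.0])
```

Each iteration either improves the incumbent design or sharpens the surrogate near it, which is exactly the loop the abstract describes; the cost growth with dimensionality noted above comes from the number of samples needed to fit the surrogate.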
Novel methodology for wide-ranged multistage morphing waverider based on conical theory
NASA Astrophysics Data System (ADS)
Liu, Zhen; Liu, Jun; Ding, Feng; Xia, Zhixun
2017-11-01
This study proposes the wide-ranged multistage morphing waverider design method. The flow field structure and aerodynamic characteristics of multistage waveriders are also analyzed. In this method, the multistage waverider is generated in the same conical flowfield, which contains a free-stream surface and different compression-stream surfaces. The obtained results show that the introduction of the multistage waverider design method can solve the problem of aerodynamic performance deterioration in the off-design state and allow the vehicle to always maintain the optimal flight state. The multistage waverider design method, combined with transfiguration flight strategy, can lead to greater design flexibility and the optimization of hypersonic wide-ranged waverider vehicles.
A probabilistic approach to aircraft design emphasizing stability and control uncertainties
NASA Astrophysics Data System (ADS)
Delaurentis, Daniel Andrew
In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision making activity, and that deterministic analysis and synthesis can lead to poor, or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Imbedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. 
Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained and the ability of the method to expose trends in the design space are noted as a key advantage.
Divertor target shape optimization in realistic edge plasma geometry
NASA Astrophysics Data System (ADS)
Dekeyser, W.; Reiter, D.; Baelmans, M.
2014-07-01
Tokamak divertor design for next-step fusion reactors heavily relies on numerical simulations of the plasma edge. Currently, the design process is mainly done in a forward approach, where the designer is strongly guided by experience and physical intuition in proposing divertor shapes, which are then thoroughly assessed by numerical computations. On the other hand, automated design methods based on optimization have proven very successful in the related field of aerodynamic design. By recasting design objectives and constraints into the framework of a mathematical optimization problem, efficient forward-adjoint based algorithms can be used to automatically compute the divertor shape which performs best with respect to the selected edge plasma model and design criteria. In past years, we have extended these methods to automated divertor target shape design, using somewhat simplified edge plasma models and geometries. In this paper, we build on and extend previous work to apply these shape optimization methods for the first time in a more realistic, single-null edge plasma and divertor geometry, as commonly used in current divertor design studies. In a case study with JET-like parameters, we show that the so-called one-shot method is very effective in solving divertor target design problems. Furthermore, through detailed shape sensitivity analysis we demonstrate that the method, already in its present state, provides physically plausible trends, allowing us to achieve a divertor design with an almost perfectly uniform power load for our particular choice of edge plasma model and design criteria.
A procedural method for the efficient implementation of full-custom VLSI designs
NASA Technical Reports Server (NTRS)
Belk, P.; Hickey, N.
1987-01-01
An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.
He, Jianbo; Li, Jijie; Huang, Zhongwen; Zhao, Tuanjie; Xing, Guangnan; Gai, Junyi; Guan, Rongzhan
2015-01-01
Experimental error control is very important in quantitative trait locus (QTL) mapping. Although numerous statistical methods have been developed for QTL mapping, a QTL detection model based on an appropriate experimental design that emphasizes error control has not been developed. Lattice design is very suitable for experiments with large sample sizes, which are usually required for accurate mapping of quantitative traits. However, the lack of a QTL mapping method based on lattice design has meant that the arithmetic mean or adjusted mean of each line of observations in the lattice design had to be used as the response variable, resulting in low QTL detection power. As an improvement, we developed a QTL mapping method termed composite interval mapping based on lattice design (CIMLD). In the lattice design, experimental errors are decomposed into random errors and block-within-replication errors. Four levels of block-within-replication errors were simulated to show the power of QTL detection under different error controls. The simulation results showed that the arithmetic mean method, which is equivalent to a method under a randomized complete block design (RCBD), was very sensitive to the size of the block variance: as the block variance increased, the power of QTL detection decreased from 51.3% to 9.4%. In contrast to the RCBD method, the power of CIMLD and the adjusted mean method did not change across block variances. The CIMLD method showed 1.2- to 7.6-fold higher power of QTL detection than the arithmetic or adjusted mean methods. Our proposed method was applied to real soybean (Glycine max) data as an example, and 10 QTLs for biomass were identified that explained 65.87% of the phenotypic variation, while only three and two QTLs were identified by the arithmetic and adjusted mean methods, respectively.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
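The classical safety-index expression referenced above takes a standard form; as a hedged illustration, assuming independent, normally distributed resistance R and stress S (a textbook simplification, not necessarily the paper's full formulation with uncertainty-error terms):

```latex
\beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}},
\qquad P_f = \Phi(-\beta)
```

Here \(\mu\) and \(\sigma\) are the means and standard deviations of resistance and stress, \(\beta\) is the safety index, and \(\Phi\) is the standard normal CDF; the proposed method solves this relationship for a design factor that achieves a specified reliability and then uses that factor in place of the conventional safety factor.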
Control/structure interaction conceptual design tool
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1990-01-01
The JPL Control/Structure Interaction Program is developing new analytical methods for designing micro-precision spacecraft with controlled structures. One of these, the Conceptual Design Tool, will illustrate innovative new approaches to the integration of multi-disciplinary analysis and design methods. The tool will be used to demonstrate homogeneity of presentation, uniform data representation across analytical methods, and integrated systems modeling. The tool differs from current 'integrated systems' that support design teams most notably in its support for the new CSI multi-disciplinary engineer. The design tool will utilize a three dimensional solid model of the spacecraft under design as the central data organization metaphor. Various analytical methods, such as finite element structural analysis, control system analysis, and mechanical configuration layout, will store and retrieve data from a hierarchical, object oriented data structure that supports assemblies of components with associated data and algorithms. In addition to managing numerical model data, the tool will assist the designer in organizing, stating, and tracking system requirements.
A computer program for the design and analysis of low-speed airfoils
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1980-01-01
A conformal mapping method for the design of airfoils with prescribed velocity distribution characteristics, a panel method for the analysis of the potential flow about given airfoils, and a boundary layer method have been combined. With this combined method, airfoils with prescribed boundary layer characteristics can be designed and airfoils with prescribed shapes can be analyzed. All three methods are described briefly. The program and its input options are described. A complete listing is given as an appendix.
A novel method for inverse fiber Bragg grating structure design
NASA Astrophysics Data System (ADS)
Yin, Yu-zhe; Chen, Xiang-fei; Dai, Yi-tang; Xie, Shi-zhong
2003-12-01
A novel grating inverse design method is proposed in this paper, which is direct in physical meaning and easy to accomplish. The key point of the method is to design and implement the desired spectral response in the grating-strength modulation domain rather than in the grating-period chirp domain. Simulated results are in good agreement with the design target. By transforming grating period chirp into grating strength modulation, a novel grating with opposite dispersion characteristics is proposed.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-22
... for implementation and continued use of a maintenance method that is designed to maintain pavement... method that is designed to maintain pavement marking retroreflectivity at or above the established levels... every pavement marking in its jurisdiction. Instead, agencies must implement methods designed to provide...
ERIC Educational Resources Information Center
Cameron, Leanne
2017-01-01
This paper reports on the learning designs, teaching methods and activities most commonly employed within the disciplines in six universities in Australia. The study sought to establish if there were significant differences between the disciplines in learning designs, teaching methods and teaching activities in the current Australian context, as…
An analytical method for designing low noise helicopter transmissions
NASA Technical Reports Server (NTRS)
Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.
1978-01-01
The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.
Limitations of the method of characteristics when applied to axisymmetric hypersonic nozzle design
NASA Technical Reports Server (NTRS)
Edwards, Anne C.; Perkins, John N.; Benton, James R.
1990-01-01
A design study of axisymmetric hypersonic wind tunnel nozzles was initiated by NASA Langley Research Center with the objective of improving the flow quality of their ground test facilities. Nozzles for Mach 6 air, Mach 13.5 nitrogen, and Mach 17 nitrogen were designed using the Method of Characteristics/Boundary Layer (MOC/BL) approach and were analyzed with a Navier-Stokes solver. Results of the analysis agreed well with the design for the Mach 6 case, but revealed oblique shock waves of increasing strength originating from near the inflection point of the Mach 13.5 and Mach 17 nozzles. The findings indicate that the MOC/BL design method has a fundamental limitation that occurs at some Mach number between 6 and 13.5. In order to define the limitation more exactly and attempt to discover its cause, a parametric study of hypersonic ideal air nozzles designed with the current MOC/BL method was performed. Results of this study indicate that, while stagnation conditions have a moderate effect on the upper limit of the method, the method fails at Mach numbers above 8.0.
An intelligent, knowledge-based multiple criteria decision making advisor for systems design
NASA Astrophysics Data System (ADS)
Li, Yongchang
In systems engineering, the design and operation of systems are two central problems that consistently attract researchers' attention. The activities in these problems often require proper decisions to be made so that the desired goal can be achieved; thus, decision making needs to be carried out carefully in the design and operation of systems. Design is a decision making process: decisions permeate the design process and are at the core of all design activities. In modern aircraft design, more and more attention is paid to the conceptual and preliminary design phases so as to increase the odds of choosing a design that will ultimately be successful at the completion of the design process; therefore, decisions made during these early design stages play a critical role in determining the success of a design. Since aerospace systems are complex systems with interacting disciplines and technologies, the Decision Makers (DMs) dealing with such design problems must balance multiple, potentially conflicting attributes/criteria, transforming a large amount of customer-supplied guidelines into a solidly defined set of requirement definitions. Thus, one could state with confidence that modern aerospace system design is a Multiple Criteria Decision Making (MCDM) process. A variety of existing decision making methods are available to deal with this type of decision problem. The selection of the most appropriate decision making method is of particular importance, since inappropriate decision methods are likely causes of misleading engineering design decisions. Without sufficient knowledge about each of the methods, it is usually difficult for the DMs to find an appropriate analytical model capable of solving their problems. In addition, as the complexity of decision problems and the demand for more capable methods increase, new decision making methods are emerging with time.
These various methods exacerbate the difficulty of selecting an appropriate decision making method. Furthermore, some DMs may exclusively use one or two specific methods which they are familiar with or trust, not realizing that these may be inappropriate for certain classes of problems and thus yield erroneous results. These issues reveal that, in order to ensure a good decision, a suitable decision method should be chosen before the decision making process proceeds. The first part of this dissertation proposes an MCDM process supported by an intelligent, knowledge-based advisor system referred to as the Multi-Criteria Interactive Decision-Making Advisor and Synthesis process (MIDAS), which facilitates the selection of the most appropriate decision making method and provides insight to the user for fulfilling different preferences. The second part of this dissertation presents an autonomous decision making advisor which is capable of dealing with ever-evolving real-time information and making autonomous decisions under uncertain conditions. The advisor encompasses a Markov Decision Process (MDP) formulation which takes uncertainty into account when determining the best action for each system state. (Abstract shortened by UMI.)
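The MDP formulation mentioned above can be sketched with standard value iteration; the two-state "advisor" problem below is invented purely for illustration and is not the dissertation's model:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-9):
    """P has shape (A, S, S): P[a, s, s'] = transition probability.
    R has shape (A, S): expected immediate reward for action a in state s.
    Returns optimal state values and a greedy policy."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)          # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# toy problem: state 0 = "decision pending", state 1 = "goal state"
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: wait (no transition)
              [[0.2, 0.8], [0.0, 1.0]]])  # action 1: act (80% success from 0)
R = np.array([[0.0, 1.0],
              [0.0, 1.0]])                # reward only accrues in the goal state
V, policy = value_iteration(P, R)
```

The optimal policy acts in the pending state (taking the uncertain transition into account) rather than waiting, which is the kind of uncertainty-aware action selection the advisor performs.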
Patel, Prinesh N; Karakam, Vijaya Saradhi; Samanthula, Gananadhamu; Ragampeta, Srinivas
2015-10-01
Quality-by-design-based methods hold a greater level of confidence against variations and greater success in method transfer. A quality-by-design-based ultra high performance liquid chromatography method was developed for the simultaneous assay of sumatriptan and naproxen along with their related substances. The first screening was performed by a fractional factorial design comprising 44 experiments covering reversed-phase stationary phases, pH, and organic modifiers. The results of the screening design experiments suggested that a phenyl hexyl column and acetonitrile were the best combination. The method was further optimized for flow rate, temperature, and gradient time by an experimental design of 20 experiments, and the knowledge space was generated for the effect of each variable on the response (number of peaks with resolution ≥ 1.50). A proficient design space was generated from the knowledge space by applying Monte Carlo simulation, successfully integrating quantitative robustness metrics during the optimization stage itself. The final method provided robust performance, which was verified and validated. Final conditions comprised a Waters® Acquity phenyl hexyl column with gradient elution using ammonium acetate (pH 4.12, 0.02 M) buffer and acetonitrile at a 0.355 mL/min flow rate and 30°C. The developed method separates all 13 analytes within a 15 min run time with fewer experiments compared to the traditional quality-by-testing approach.
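The Monte Carlo robustness step can be sketched as follows; the response model, parameter tolerances, and acceptance criterion below are hypothetical stand-ins for illustration only, not the published method's actual model:

```python
import numpy as np

def robustness_probability(flow=0.355, temp=30.0, n_sims=10000, seed=7):
    """Fraction of simulated runs meeting Rs >= 1.5 for the critical pair,
    under assumed operational scatter and a hypothetical linear response."""
    rng = np.random.default_rng(seed)
    f = rng.normal(flow, 0.01, n_sims)   # pump precision (assumed tolerance)
    t = rng.normal(temp, 0.5, n_sims)    # oven precision (assumed tolerance)
    # hypothetical response surface fitted around the nominal conditions
    rs = 1.8 - 2.0 * (f - flow) - 0.05 * (t - temp)
    return float(np.mean(rs >= 1.5))

prob = robustness_probability()
```

Nominal conditions that keep this probability high across the simulated scatter define the robust region of the design space; the published work applies the same idea with its experimentally fitted response models.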
Zhong, Yi; Gross, Herbert
2017-05-01
Freeform surfaces play important roles in improving the imaging performance of off-axis optical systems. However, for some systems with high requirements in specifications, the structure of the freeform surfaces can be very complicated and the number of freeform surfaces can be large. This brings challenges in fabrication and increases cost. Therefore, achieving a good initial system with minimum aberrations and a reasonable structure before implementing freeform surfaces is essential for optical designers. Existing initial system design methods are limited to certain types of systems; a universal tool or method to achieve a good initial system efficiently is therefore very important. In this paper, based on Nodal aberration theory and the system design method using Gaussian Brackets, the initial system design method is extended from rotationally symmetric systems to general non-rotationally symmetric systems. The design steps are introduced, and on this basis two off-axis three-mirror systems are pre-designed using spherical surfaces. The primary aberrations are minimized using a nonlinear least-squares solver. This work provides insight and guidance for the initial system design of off-axis mirror systems.
An improved design method of a tuned mass damper for an in-service footbridge
NASA Astrophysics Data System (ADS)
Shi, Weixing; Wang, Liangkun; Lu, Zheng
2018-03-01
The tuned mass damper (TMD) has a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. Firstly, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field test; the original FEM is then updated. The TMD is designed according to the new updated FEM, and it is optimized according to simulations of its vibration control effects. Finally, the installation and field measurement of the TMD are carried out. The improved design method can be applied to both in-service and newly built footbridges. This paper illustrates the improved design method with an engineering example. The frequency identification results of the field test and the original FEM show a relatively large difference between them. The TMD designed according to the updated FEM has a better vibration control effect than the TMD designed according to the original FEM. The site test results show that the TMD is effective in controlling human-induced vibrations.
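For context, a widely used starting point for TMD parameter selection (classical background, not the paper's model-updating procedure) is Den Hartog's optimum for an undamped single-degree-of-freedom structure; the footbridge frequency and mass ratio below are assumed values:

```python
import math

def den_hartog_tmd(mass_ratio, f_structure):
    """Den Hartog's classical optimum TMD parameters for an undamped
    SDOF structure: tuning ratio 1/(1+mu), damping sqrt(3*mu/(8*(1+mu)^3))."""
    mu = mass_ratio
    tuning = 1.0 / (1.0 + mu)                          # f_tmd / f_structure
    zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu)**3)) # TMD damping ratio
    return tuning * f_structure, zeta

# assumed footbridge: 2.0 Hz lateral/vertical mode, 2% TMD mass ratio
f_tmd, zeta_opt = den_hartog_tmd(0.02, 2.0)
```

This yields a TMD tuned slightly below the structural frequency with a few percent damping; the abstract's point is that if the FEM frequency is wrong, this tuning is wrong too, which is why the field-test-based model updating matters.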
Design optimization of natural laminar flow bodies in compressible flow
NASA Technical Reports Server (NTRS)
Dodbele, Simha S.
1992-01-01
An optimization method has been developed to design axisymmetric body shapes such as fuselages, nacelles, and external fuel tanks with increased transition Reynolds numbers in subsonic compressible flow. The new design method involves a constraint minimization procedure coupled with analysis of the inviscid and viscous flow regions and linear stability analysis of the compressible boundary-layer. In order to reduce the computer time, Granville's transition criterion is used to predict boundary-layer transition and to calculate the gradients of the objective function, and linear stability theory coupled with the e(exp n)-method is used to calculate the objective function at the end of each design iteration. Use of a method to design an axisymmetric body with extensive natural laminar flow is illustrated through the design of a tiptank of a business jet. For the original tiptank, boundary layer transition is predicted to occur at a transition Reynolds number of 6.04 x 10(exp 6). For the designed body shape, a transition Reynolds number of 7.22 x 10(exp 6) is predicted using compressible linear stability theory coupled with the e(exp n)-method.
A Proposal for the use of the Consortium Method in the Design-build system
NASA Astrophysics Data System (ADS)
Miyatake, Ichiro; Kudo, Masataka; Kawamata, Hiroyuki; Fueta, Toshiharu
In view of the necessity for efficient implementation of public works projects, the advanced technical skills of private firms are expected to be utilized to reduce project costs, improve the performance and functions of construction objects, and shorten work periods. The design-build system is a method of ordering design and construction as a single contract, including the design of structural forms and the main specifications of the construction object. It is a system in which the high technical capabilities of private firms can be utilized to ensure the quality of design and construction, rational design, and project efficiency. The objective of this study is to examine the use of a method that forms a consortium of civil engineering consultants and construction companies, an open issue in the implementation of the design-build method. Furthermore, by studying the various forms of consortium that may be introduced in the future, it proposes the procedural items required to utilize this method during bidding and after signing a contract, such as estimate submission by the civil engineering consultants.
A new statistical method for design and analyses of component tolerance
NASA Astrophysics Data System (ADS)
Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam
2017-03-01
Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products, and engineers typically use handbooks to conduct it. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, most commonly the normal distribution. When the statistical distribution of the given variable is unknown, a different statistical method is needed to design the tolerance. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances, estimating the distribution parameters with the percentile method (PM). The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, a wider tolerance for each component can be used while maintaining the same target performance.
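As a sketch of the idea (not the paper's full percentile-method estimator), the generalized lambda distribution's quantile function can be evaluated directly to obtain tolerance limits once its four parameters are in hand. The Ramberg-Schmeck parameterization and the parameter values in the usage example below are illustrative assumptions:

```python
def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the generalized lambda distribution
    (Ramberg-Schmeck form). In the paper the four lambdas would be
    estimated from data by the percentile method; here they are given."""
    return lam1 + (u ** lam3 - (1.0 - u) ** lam4) / lam2

def tolerance_limits(lam, coverage=0.9973):
    """Two-sided tolerance limits covering `coverage` of the fitted
    distribution (0.9973 mimics the usual +/- 3-sigma convention)."""
    alpha = (1.0 - coverage) / 2.0
    return gld_quantile(alpha, *lam), gld_quantile(1.0 - alpha, *lam)

# illustrative, symmetric fit around a nominal dimension of 10 units
lam = (10.0, 1.0, 0.5, 0.5)
lower, upper = tolerance_limits(lam)
```

Because the GLD can mimic many shapes, the same two lines of tolerance arithmetic apply whether the component data look normal, skewed, or heavy-tailed.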
Neuhauser, Linda; Kreps, Gary L; Morrison, Kathleen; Athanasoulis, Marcos; Kirienko, Nikolai; Van Brunt, Deryk
2013-08-01
This paper describes how design science theory and methods and use of artificial intelligence (AI) components can improve the effectiveness of health communication. We identified key weaknesses of traditional health communication and features of more successful eHealth/AI communication. We examined characteristics of the design science paradigm and the value of its user-centered methods to develop eHealth/AI communication. We analyzed a case example of the participatory design of AI components in the ChronologyMD project intended to improve management of Crohn's disease. eHealth/AI communication created with user-centered design shows improved relevance to users' needs for personalized, timely and interactive communication and is associated with better health outcomes than traditional approaches. Participatory design was essential to develop ChronologyMD system architecture and software applications that benefited patients. AI components can greatly improve eHealth/AI communication, if designed with the intended audiences. Design science theory and its iterative, participatory methods linked with traditional health communication theory and methods can create effective AI health communication. eHealth/AI communication researchers, developers and practitioners can benefit from a holistic approach that draws on theory and methods in both the design sciences and the human and social sciences to create successful AI health communication.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
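The thesis' optimizer is matrix-free in the sense that constraint Jacobians enter only through products with vectors, as supplied by direct and adjoint sensitivity methods. A toy augmented-Lagrangian sketch in that spirit; the quadratic objective, the single linear constraint, and all step sizes are illustrative, not the aerostructural problem:

```python
import numpy as np

def grad_f(x):
    # gradient of the illustrative objective f(x) = x1^2 + x2^2
    return 2.0 * x

def constraint(x):
    # illustrative linear constraint c(x) = x1 + x2 - 1 = 0
    return np.array([x[0] + x[1] - 1.0])

def jac_T_vec(x, w):
    # adjoint product J(x).T @ w -- the only access to the "Jacobian";
    # the matrix itself is never formed
    return np.array([w[0], w[0]])

def minimize_matrix_free(x0, rho=10.0, outer=20, inner=200, lr=1e-2):
    """Augmented Lagrangian loop: inner gradient descent on
    f + lam.c + (rho/2)|c|^2, outer first-order multiplier update."""
    x, lam = x0.astype(float).copy(), np.zeros(1)
    for _ in range(outer):
        for _ in range(inner):
            c = constraint(x)
            x -= lr * (grad_f(x) + jac_T_vec(x, lam + rho * c))
        lam += rho * constraint(x)   # multiplier update
    return x
```

On the real aerostructural problem the Jacobian of thousands of constraints is never assembled; each iteration only queries product routines like `jac_T_vec`, which is what keeps the cost manageable.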
An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle
NASA Astrophysics Data System (ADS)
Wang, Yue; Gao, Dan; Mao, Xuming
2018-03-01
A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing experience for other civil jet product designs.
A rapid method for soil cement design : Louisiana slope value method.
DOT National Transportation Integrated Search
1964-03-01
The current procedure used by the Louisiana Department of Highways for laboratory design of cement stabilized soil base and subbase courses is taken from standard AASHO test methods, patterned after Portland Cement Association criteria. These methods...
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
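As a concrete (if brute-force) contrast to the safety-factor approach, the probability that stress exceeds strength can be estimated by sampling. The normal distributions and MPa values below are illustrative assumptions, and the report's FPI and response-surface methods exist precisely to avoid this kind of expensive direct sampling:

```python
import numpy as np

def prob_of_failure(n=200_000, seed=1):
    """Crude Monte Carlo estimate of P(stress > strength) for a
    stress-strength pair with assumed normal distributions (MPa)."""
    rng = np.random.default_rng(seed)
    stress = rng.normal(300.0, 30.0, n)    # applied stress, illustrative
    strength = rng.normal(450.0, 45.0, n)  # material strength, illustrative
    return np.mean(stress > strength)
```

For these assumed distributions the failure probability is a fraction of a percent even though a deterministic safety factor (450/300 = 1.5) would report a single comfortable margin; the probabilistic view quantifies how the scatter erodes that margin.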
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
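A minimal sketch of the time-domain route: treat each mission's peak load as a draw from an extreme-value distribution, fit it, and read off a high quantile as the design limit load. The Gumbel (type I extreme-value) model and the moment fit here are illustrative assumptions; the report derives the appropriate distribution from extreme-value theory itself:

```python
import math
import numpy as np

def gumbel_fit_moments(mission_maxima):
    """Fit a Gumbel distribution to per-mission maximum loads
    by the method of moments."""
    m = np.mean(mission_maxima)
    s = np.std(mission_maxima, ddof=1)
    beta = s * math.sqrt(6.0) / math.pi      # scale parameter
    mu = m - 0.5772156649 * beta             # location (Euler-Mascheroni constant)
    return mu, beta

def design_limit_load(mu, beta, p=0.99):
    """Load with non-exceedance probability p in a single mission,
    i.e. the Gumbel quantile at p."""
    return mu - beta * math.log(-math.log(p))
```

The design limit load is then simply the fitted quantile at whatever non-exceedance probability the structural design criterion specifies.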
General design method for three-dimensional potential flow fields. 1: Theory
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1980-01-01
A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.
Culture, Interface Design, and Design Methods for Mobile Devices
NASA Astrophysics Data System (ADS)
Lee, Kun-Pyo
Aesthetic differences and similarities among cultures are clearly among the most important issues in cultural design. However, ever since products became knowledge-supporting tools, the visible elements of products have become more universal, so the invisible parts of products, such as interface and interaction, are becoming more important. Therefore, cultural design should extend beyond material and phenomenal culture to the invisible elements of culture, such as people's conceptual models. This chapter aims to explain how we address these invisible cultural elements in interface design and design methods by exploring users' cognitive styles and communication patterns in different cultures. Regarding cultural interface design, we examined users' conceptual models while interacting with mobile phone and website interfaces, and observed cultural differences in task performance and viewing patterns, which appeared to agree with the cultural cognitive styles known as holistic versus analytic thought. Regarding design methods for culture, we explored how to localize design methods such as the focus group interview and the generative session for specific cultural groups; the results of comparative experiments revealed cultural differences in participants' behaviors and performance in each design method and led us to suggest how to conduct them in East Asian cultures. Mobile Observation Analyzer and Wi-Pro, user research tools we invented to capture user behaviors and needs especially in mobile contexts, are also introduced.
Using a mixed-methods design to examine nurse practitioner integration in British Columbia.
Sangster-Gormley, Esther; Griffith, Janessa; Schreiber, Rita; Borycki, Elizabeth
2015-07-01
To discuss and provide examples of how mixed-methods research was used to evaluate the integration of nurse practitioners (NPs) into a Canadian province. Legislation enabling NPs to practise in British Columbia (BC) was enacted in 2005. This research evaluated the integration of NPs and their effect on the BC healthcare system. Data were collected using surveys, focus groups, participant interviews and case studies over three years. Data sources and methods were triangulated to determine how the findings addressed the research questions. The challenges and benefits of using the multiphase design are highlighted in the paper. The multiphase mixed-methods research design was selected because of its applicability to evaluation research. The design proved to be robust and flexible in answering research questions. As sub-studies within the multiphase design are often published separately, it can be difficult for researchers to find examples. This paper highlights ways that a multiphase mixed-methods design can be conducted for researchers unfamiliar with the process.
Design of transonic airfoil sections using a similarity theory
NASA Technical Reports Server (NTRS)
Nixon, D.
1978-01-01
A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.
Minimizing student’s faults in determining the design of experiment through inquiry-based learning
NASA Astrophysics Data System (ADS)
Nilakusmawati, D. P. E.; Susilawati, M.
2017-10-01
The purposes of this study were to describe the use of the inquiry method in an effort to minimize students’ faults in designing an experiment and to determine the effectiveness of implementing the inquiry method in minimizing students’ faults in designing experiments in an experimental design course. This research is participatory action research, following an action research design model. The data sources were fifth-semester students taking the experimental design course at the Mathematics Department, Faculty of Mathematics and Natural Sciences, Udayana University. Data were collected through tests, interviews, and observations, and the hypothesis was tested by t-test. The results showed that implementing the inquiry method to minimize students’ faults in designing experiments, analyzing experimental data, and interpreting them reduced faults by an average of 10.5% in cycle 1 and by an average of 8.78% in cycle 2. Based on the t-test results, it can be concluded that the inquiry method is effective in minimizing students’ faults in designing experiments, analyzing experimental data, and interpreting them. The nature of the teaching materials in the experimental design course, which demand that students think systematically, logically, and critically in analyzing data and interpreting test cases, makes inquiry an appropriate method. In addition, the use of learning tools, in this case the teaching materials and the student worksheets, is one of the factors that makes the inquiry method effective in minimizing students’ faults when designing experiments.
AI/OR computational model for integrating qualitative and quantitative design methods
NASA Technical Reports Server (NTRS)
Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor
1990-01-01
A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while retaining the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
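A toy version of the rank-ordered goal idea, with a single thickness variable and purely illustrative constants (not the report's cover-plate model or its penalty-function solvers): the higher-priority goal is satisfied first, then the lower-priority deviation is minimized among the remaining candidates.

```python
import numpy as np

def goal_deviations(t):
    """Overshoot of each design goal for an illustrative plate thickness t:
    weight grows with t, load-induced stress falls with t."""
    weight = 4.0 * t
    stress = 12.0 / t
    d_weight = max(0.0, weight - 10.0)   # deviation above the weight goal
    d_stress = max(0.0, stress - 6.0)    # deviation above the stress goal
    return d_weight, d_stress

def solve_rank_ordered(ts):
    """Lexicographic (rank-ordered) goal programming by grid search:
    first drive the stress deviation as low as possible, then pick the
    design minimizing the weight deviation among those candidates."""
    devs = [goal_deviations(t) for t in ts]
    best_stress = min(ds for _, ds in devs)
    feasible = [(dw, t) for (dw, ds), t in zip(devs, ts)
                if ds <= best_stress + 1e-12]
    return min(feasible)[1]
```

With these constants the stress goal forces t >= 2, and among those designs the thinnest plate already meets the weight goal, so the lexicographic search settles on t = 2.
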
Controller design via structural reduced modeling by FETM
NASA Technical Reports Server (NTRS)
Yousuff, A.
1986-01-01
The Finite Element - Transfer Matrix (FETM) method has been developed to reduce the computation involved in the analysis of structures. This widely accepted method, however, has certain limitations and does not directly produce reduced models for control design. To overcome these shortcomings, a modification of the FETM method has been developed. The modified FETM method easily produces reduced models that are tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open-loop frequencies and mode shapes with less computation, (2) overcome limitations of the original FETM method, and (3) simplify the procedures for output feedback, constrained compensation, and decentralized control. This semiannual report presents the development of the modified FETM and, through an example, illustrates its applicability to an output feedback and a decentralized control design.
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structural element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis, and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.
Incorporating Servqual-QFD with Taguchi Design for optimizing service quality design
NASA Astrophysics Data System (ADS)
Arbi Hadiyat, M.
2018-03-01
Deploying good service design in service companies has been an ongoing issue in improving customer satisfaction, especially when service quality is measured by Parasuraman’s SERVQUAL. Many researchers have proposed methods for designing services, some from an engineering viewpoint, notably by implementing the QFD method or the robust Taguchi method. The QFD method finds a qualitative solution by generating the “hows”, while the Taguchi method gives a more quantitative calculation for finding an optimal solution. Incorporating both QFD and Taguchi, as done in this paper, yields a better design process. The purpose of this research is to evaluate the incorporated methods by applying them to a case study, then analyze the results and assess the robustness of those methods with respect to customer perception of service quality. Starting by measuring service attributes using SERVQUAL and finding improvements with QFD, the deployment of the QFD solution is then generated by defining Taguchi factor levels and calculating the signal-to-noise ratio in an orthogonal array, from which the optimized Taguchi response is found. A case study is given for designing service in a local bank. The service design obtained from the analysis was then evaluated and shown to still meet customer satisfaction. Incorporating QFD and Taguchi performed well and can be adopted and developed in further research evaluating the robustness of the result.
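On the Taguchi side, the core computation is the signal-to-noise ratio per factor level from the orthogonal-array runs. A minimal sketch for a "larger-the-better" response such as a satisfaction score; the level names and response values are illustrative:

```python
import math

def sn_larger_the_better(ys):
    """Taguchi signal-to-noise ratio (dB) for a larger-the-better
    response: SN = -10 log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def best_level(runs):
    """runs maps a factor level to its replicated responses from the
    orthogonal array; the optimal level maximizes the S/N ratio."""
    return max(runs, key=lambda k: sn_larger_the_better(runs[k]))

# illustrative replicated scores for two levels of one service factor
choice = best_level({"A": [4.0, 4.2], "B": [3.0, 3.1]})
```

Repeating `best_level` factor by factor over the array yields the optimized Taguchi response that the QFD "hows" are then checked against.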
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
Optimum structural design with plate bending elements - A survey
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Prasad, B.
1981-01-01
A survey is presented of recently published papers in the field of optimum structural design of plates, largely with respect to the minimum-weight design of plates subject to such constraints as fundamental frequency maximization. It is shown that, due to the availability of powerful computers, the trend in optimum plate design is away from methods tailored to specific geometry and loads and toward methods that can be easily programmed for any kind of plate, such as finite element methods. A corresponding shift is seen in optimization from variational techniques to numerical optimization algorithms. Among the topics covered are fully stressed design and optimality criteria, mathematical programming, smooth and ribbed designs, design against plastic collapse, buckling constraints, and vibration constraints.
Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].
ERIC Educational Resources Information Center
Eseryel, Deniz; Spector, J. Michael
ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…
Design of the laser acupuncture therapeutic instrument.
Li, Chengwei; Zhen, Huang
2006-01-01
Laser acupuncture is defined as the stimulation of traditional acupuncture points with low-intensity, non-thermal laser irradiation. It has been well applied in clinical practice since the 1970s; however, some traditional acupuncture manipulation methods still cannot be implemented in the design of such instruments, notably the lifting-and-thrusting method and the twisting-and-twirling method, which are essential methods in traditional acupuncture. The objective of this work was to design and build a low-cost portable laser acupuncture therapeutic instrument that can implement these two essential manipulation methods. Digital PID control is used to regulate the power of the laser diode (LD) and implement the lifting-and-thrusting method, while a special optical system is designed to implement the twisting-and-twirling method. An MSP430 microcontroller system is used as the control centre of the instrument. The engineering realization of the lifting-and-thrusting and twisting-and-twirling manipulation methods is the technological innovation of this work.
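A sketch of the digital PID idea behind the lifting-and-thrusting method, where LD optical power is driven to follow a setpoint (and, in the instrument, modulated over time). The first-order plant model, its time constant, and all gains below are illustrative assumptions, not the instrument's actual LD dynamics:

```python
def pid_track(setpoint, kp=0.8, ki=0.5, kd=0.001, dt=0.01, steps=5000):
    """Discrete PID loop driving a modeled laser-diode power toward
    a setpoint. Plant: first-order lag tau*dP/dt = -P + u."""
    power, integral, prev_err = 0.0, 0.0, 0.0
    tau = 0.05   # assumed LD response time constant, seconds
    for _ in range(steps):
        err = setpoint - power
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control effort
        prev_err = err
        power += dt * (-power + u) / tau            # plant update
    return power
```

In the instrument the setpoint itself would be swept up and down to emulate lifting and thrusting; the PID loop's job is simply to make the delivered power track that profile without steady-state error.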
Core Professionalism Education in Surgery: A Systematic Review
Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender
2018-01-01
Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable. PMID:29553464
Design method of large-diameter rock-socketed pile with steel casing
NASA Astrophysics Data System (ADS)
Liu, Ming-wei; Fang, Fang; Liang, Yue
2018-02-01
There is a lack of design and calculation methods for large-diameter rock-socketed piles with steel casing. In connection with the “twelfth five-year plan” National Science & Technology Pillar Program of China on “Key technologies on the ports and wharfs constructions of the mountain canalization channels”, this paper puts forward structured design requirements for the concrete, the steel bar distribution, and the steel casing, together with a checking calculation method for the bearing capacity of the normal section of the pile and the maximum crack width at the bottom of the steel casing. The design method provides guidance for the design of large-diameter rock-socketed piles with steel casing.