The following results are related to Digital Humanities and Cultural Heritage. More results are available on OpenAIRE - Explore.

  • Digital Humanities and Cultural Heritage
  • US
  • VTechWorks
  • Journal of Phycology

Sorted by: Date (most recent)
  • Open Access
    Authors: Shakir, Umair;

    My dissertation is about how engineering educators can use natural language processing (NLP) to implement open-ended assessments in undergraduate engineering degree programs. Engineering students need to develop an ability to exercise judgment about better and worse outcomes of their decisions. One important consideration for improving engineering students' judgment involves creating sound educational assessments. Currently, engineering educators face a trade-off in selecting between open- and closed-ended assessments. Closed-ended assessments are easy to administer and score but are limited in what they measure, given that students are often required to choose from an a priori list. Conversely, open-ended assessments allow students to write their answers in their own words, in any way they choose. However, open-ended assessments are likely to take more person-hours to grade and to lack consistency both across graders (inter-grader) and within the same grader (intra-grader). The solution to this challenge is the use of NLP. The working principles of existing NLP models are the tallying of words, keyword matching, or syntactic similarity of words, which have often proved too brittle to capture the diversity of language that students may write. Therefore, the problem that motivated the present study is how to assess student responses based on underlying concepts and meanings instead of morphological characteristics or grammatical structure. Some of this problem can be addressed by developing NLP-assisted grading tools based on transformer-based large language models (TLLMs) such as BERT, MPNet, and GPT-4, because TLLMs are trained on billions of words and have billions of parameters, giving them the capacity to capture richer semantic representations of input text. Given that TLLMs have only become available in the last five years, there is a significant lack of research on integrating them into the assessment of open-ended engineering case studies. My dissertation study aims to fill this research gap. I developed and evaluated four NLP approaches based on TLLMs for thematic analysis of student responses to eight question prompts of engineering ethics and systems thinking case scenarios. The study's research design comprised the following steps. First, I developed an example bank for each question prompt with two procedures: (a) human-in-the-loop natural language processing (HILNLP) and (b) traditional qualitative coding. Second, I assigned labels from the example banks to unlabeled student responses with two NLP techniques, k-Nearest Neighbors (kNN) and Zero-Shot Classification (ZSC), in the following configurations: (i) kNN with k=1, (ii) kNN with k=3, (iii) ZSC with multi-labels=false, and (iv) ZSC with multi-labels=true. The kNN approach took as input both the sentences and their labels from the example banks, whereas the ZSC approach took as input only the labels. Third, I read each sentence or phrase along with the model's suggested label(s) to evaluate whether the assigned label represented the idea described in the sentence, assigning numerical ratings of accurate (1), neutral (0), or inaccurate (-1). Lastly, I used those ratings to calculate the accuracy of the NLP approaches. The results showed moderate accuracy in thematically analyzing students' open-ended responses to two different engineering case scenarios: no single method among the four performed consistently better than the others across all question prompts, and the highest accuracy rate varied between 53% and 92%, depending upon the question prompt and NLP method. Despite these mixed results, this study accomplishes multiple goals. My dissertation demonstrates to community members that TLLMs have potential for positive impacts on improving classroom practices in engineering education. In doing so, it takes up one aspect of instructional design: assessment of students' learning outcomes in engineering ethics and systems thinking skills. Further, my study derived important implications for practice in engineering education. First, I offer lessons and guidelines for educators interested in incorporating NLP into their educational assessment. Second, the open-source code is uploaded to a GitHub repository, making it accessible to a larger group of users. Third, I offer suggestions for qualitative researchers on conducting NLP-assisted qualitative analysis of textual data. Overall, my study introduces state-of-the-art TLLM-based NLP approaches to a research field where they hold potential yet remain underutilized. This study can encourage engineering education researchers to utilize these NLP methods to analyze the vast textual data generated in engineering education, thereby reducing the number of missed opportunities to glean information for actors and agents in engineering education. Doctor of Philosophy
    (A minimal code sketch of the example-bank kNN labeling step appears at the end of this entry.)

    VTechWorks
    Doctoral thesis . 2023
    License: CC BY NC
    Data sources: VTechWorks
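
    A minimal sketch of the example-bank kNN labeling step described above, assuming the sentence-transformers library; the encoder name, example sentences, and labels are illustrative stand-ins rather than the dissertation's actual example bank or code.

        # Sketch: assign a label to a student response via k-nearest neighbors
        # over an example bank of (sentence, label) pairs embedded with a
        # pretrained sentence encoder (stand-in for a TLLM). Example bank,
        # labels, and model name are illustrative assumptions.
        from collections import Counter
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

        example_bank = [  # hypothetical (sentence, label) pairs from prior coding
            ("Consider who is affected by the design decision.", "stakeholder awareness"),
            ("Weigh the long-term consequences of each option.", "consequence analysis"),
            ("The parts of the system interact in unexpected ways.", "system interactions"),
        ]
        bank_texts = [s for s, _ in example_bank]
        bank_labels = [lab for _, lab in example_bank]
        bank_emb = model.encode(bank_texts, convert_to_tensor=True)

        def knn_label(response, k=3):
            """Return the majority label among the k nearest example-bank sentences."""
            resp_emb = model.encode(response, convert_to_tensor=True)
            scores = util.cos_sim(resp_emb, bank_emb)[0]  # cosine similarity to each example
            top_idx = scores.topk(k=min(k, len(bank_labels))).indices.tolist()
            return Counter(bank_labels[i] for i in top_idx).most_common(1)[0][0]

        print(knn_label("Engineers should think about everyone the bridge will serve.", k=1))
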
  • Open Access
    Authors: Allen, Amy E.; Kavanagh, Anne Marie; ni Cassaithe, Caitriona;

    Accepted version

    VTechWorks
    Part of book or chapter of book . 2023
    Data sources: VTechWorks
  • Open Access
    Authors: Vikram Mohanty; Kurt Luther;

    Historical photos are valuable for their cultural and economic significance, but can be difficult to identify accurately due to various challenges such as low-quality images, lack of corroborating evidence, and limited research resources. Misidentified photos can have significant negative consequences, including lost economic value, incorrect historical records, and the spread of misinformation that can lead to perpetuating conspiracy theories. To accurately assess the credibility of a photo identification (ID), it may be necessary to conduct investigative research, use domain knowledge, and consult experts. In this paper, we introduce DoubleCheck, a quality assessment framework for verifying historical photo IDs on Civil War Photo Sleuth (CWPS), a popular online platform for identifying American Civil War-era photos using facial recognition and crowdsourcing. DoubleCheck focuses on improving CWPS's user experience and system architecture to display information useful for assessing the quality of historical photo IDs on CWPS. In a mixed-methods evaluation of DoubleCheck, we found that users contributed a wide diversity of sources for photo IDs, which helped facilitate the community's assessment of these IDs through DoubleCheck's provenance visualizations. Further, DoubleCheck's quality assessment badges and visualizations supported users in making accurate assessments of photo IDs, even in cases involving ID conflicts. Comment: Accepted to ACM Journal on Computing and Cultural Heritage (JOCCH)

    VTechWorks
    Other literature type . 2023
    Data sources: VTechWorks
    arXiv.org e-Print Archive
    Other literature type . Preprint . 2023
    https://doi.org/10.48550/arxiv...
    Article . 2023
    License: CC BY NC ND
    Data sources: Datacite
    Journal on Computing and Cultural Heritage
    Article . 2023 . Peer-reviewed
    Data sources: Crossref
    Access Routes: Green, Bronze
  • Open Access
    Authors: Lei, Shuo;

    Recent advances in large neural network-style models have demonstrated great performance in various applications, such as image generation, question answering, and audio classification. However, these deep and high-capacity models require a large amount of labeled data to function properly, rendering them inapplicable in many real-world scenarios. This dissertation focuses on the development and evaluation of advanced machine learning algorithms to solve the following research questions: (1) How to learn novel classes with limited labeled data, (2) How to adapt a large pre-trained model to the target domain if only unlabeled data is available, (3) How to boost the performance of a few-shot learning model with unlabeled data, and (4) How to utilize limited labeled data to learn new classes without training data in the same domain. First, we study few-shot learning in text classification tasks. Meta-learning is becoming a popular approach for addressing few-shot text classification and has achieved state-of-the-art performance. However, the performance of existing approaches heavily depends on the inter-class variance of the support set. To address this problem, we propose the TART network for few-shot text classification. The model enhances generalization by transforming class prototypes into per-class fixed reference points in task-adaptive metric spaces. In addition, we design a novel discriminative reference regularization that maximizes the divergence between transformed prototypes in task-adaptive metric spaces to further improve performance. Second, we focus on self-learning in the cross-lingual transfer task. Our goal here is to develop a framework that allows a pretrained cross-lingual model to continue learning with a large amount of unlabeled data. Existing self-learning methods in cross-lingual transfer tasks suffer from the large number of incorrectly pseudo-labeled samples used in the training phase. We first design an uncertainty-aware cross-lingual transfer framework with pseudo-partial-labels. We also propose a novel pseudo-partial-label estimation method that considers prediction confidences and the limit on the number of candidate classes. Next, to boost the performance of the few-shot learning model with unlabeled data, we propose a semi-supervised approach for the few-shot semantic segmentation task. Existing solutions for few-shot semantic segmentation cannot easily exploit image-level weak annotations. We propose a class-prototype augmentation method that enriches the prototype representation by utilizing a few image-level annotations, achieving superior performance in one-/multi-way and weak-annotation settings. We also design a robust strategy with soft-masked average pooling to handle the noise in image-level annotations, which considers the prediction uncertainty and employs a task-specific threshold to mask distractions. Finally, we study cross-domain few-shot learning in the semantic segmentation task. Most existing few-shot segmentation methods consider a setting where base classes are drawn from the same domain as the new classes. Nevertheless, gathering enough training data for meta-learning is either unattainable or impractical in many applications. We extend few-shot semantic segmentation to a new task, called Cross-Domain Few-Shot Semantic Segmentation (CD-FSS), which aims to generalize the meta-knowledge from domains with sufficient training labels to low-resource domains. We then establish a new benchmark for the CD-FSS task and evaluate both representative few-shot segmentation methods and transfer-learning-based methods on it. We also propose a novel Pyramid-Anchor-Transformation based few-shot segmentation network (PATNet), in which domain-specific features are transformed into domain-agnostic ones so that downstream segmentation modules can quickly adapt to unseen domains. Nowadays, deep learning techniques play a crucial role in our everyday lives. In addition, they are crucial to the success of many e-commerce and local businesses by enhancing data analytics and decision-making. Notable applications include intelligent transportation, intelligent healthcare, natural language generation, and intrusion detection, among others. To achieve reasonable performance on a new task, these deep and high-capacity models require thousands of labeled examples, which increases the data collection effort and the computational cost of training a model. Moreover, in many disciplines it might be difficult or even impossible to obtain data due to concerns such as privacy and safety. This dissertation focuses on learning with limited labeled data in natural language processing and computer vision tasks. To recognize novel classes with a few examples in text classification tasks, we develop a deep learning-based model that can capture both cross-task transferable knowledge and task-specific features. We also build an uncertainty-aware self-learning framework and a semi-supervised few-shot learning method, which allow us to boost the pre-trained model with easily accessible unlabeled data. In addition, we propose a cross-domain few-shot semantic segmentation method to generalize the model to different domains with a few examples. By handling these unique challenges in learning with limited labeled data and developing suitable approaches, we hope to improve the efficiency and generalization of deep learning methods in the real world. Doctor of Philosophy
    (A sketch of the class-prototype idea behind few-shot classification appears at the end of this entry.)

    VTechWorks
    Doctoral thesis . 2023
    Data sources: VTechWorks
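
    A minimal sketch of the class-prototype idea that underlies few-shot text classification as described above. This is a generic prototypical-network-style illustration, not the TART model from the dissertation; the encoder, classes, and support examples are assumptions.

        # Sketch: a class prototype is the mean embedding of a class's few
        # support examples; a query is assigned to the nearest prototype.
        # Encoder name, classes, and support examples are invented.
        import torch
        from sentence_transformers import SentenceTransformer

        encoder = SentenceTransformer("all-MiniLM-L6-v2")

        support = {  # a few labeled examples per novel class (hypothetical)
            "billing question": ["I was charged twice this month.",
                                 "Why did my invoice increase?"],
            "technical issue": ["The app crashes when I open settings.",
                                "I cannot log in after the update."],
        }

        prototypes = {  # class prototype = mean of embedded support examples
            label: encoder.encode(examples, convert_to_tensor=True).mean(dim=0)
            for label, examples in support.items()
        }

        def classify(query):
            """Assign the query to the class whose prototype is closest (cosine)."""
            q = encoder.encode(query, convert_to_tensor=True)
            scores = {label: torch.nn.functional.cosine_similarity(q, p, dim=0).item()
                      for label, p in prototypes.items()}
            return max(scores, key=scores.get)

        print(classify("My payment went through two times."))
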
  • Open Access
    Authors: Ng, Wen Nie;

    The goal of the paper is to enhance the metadata standard for fashion collections by expanding the controlled vocabulary and metadata elements of Costume Core, a metadata schema designed specifically for fashion artifacts. Various techniques are employed to achieve this goal, including identifying new descriptors using word-embedding similarity measurements and adding new descriptive terms for precise artifact descriptions to use when re-cataloging a university fashion collection in Costume Core. The paper also provides a sneak peek of the Model Output Confirmative Helper Application, which simplifies the vocabulary review process. Additionally, a survey was conducted to collect insights into how other fashion professionals use metadata when describing dress artifacts. The survey results reveal (1) commonly used metadata standards in the historic fashion domain; (2) sample metadata that respondents use; and (3) a partial list of potential metadata that can be appended to Costume Core and is relevant to Virginia Tech's Oris Glisson Historic Costume and Textile Collection. The expanded Costume Core resulting from the project offers a more comprehensive way of describing fashion collection holdings and artifacts, and it has the potential to be adopted by fashion collections to produce metadata that is findable, accessible, interoperable, and reusable. (A sketch of the word-embedding similarity step appears at the end of this entry.)
    1. The abstract was peer-reviewed.
    2. Slides and presentation were prepared and presented by Wen Nie Ng based on a previously published article, incorporating updates on ongoing projects that have stemmed from the grant: Smith, D., Ng, W. N., McIrvin, C., Miller, C., and Spencer, J. "Comparative Study and Expansion of Metadata Standards for Historic Fashion Collections." Visual Resources Association Bulletin 50, no. 1 (June 2023). https://online.vraweb.org/index.php/vrab/article/view/
    3. Educational component and relevance to attendees: the information shared in this presentation may benefit a diverse range of individuals due to the widespread impact of fashion on culture and society, both historically and in the 21st century. Additionally, the general public makes up the majority of online users of digital fashion collections, so extending the metadata schema to capture vocabulary likely used by online users will result in more satisfactory searches on the user end.
    4. Slides were created in Google Slides and exported as a pptx file.
    Virginia Tech University Libraries Collaborative Research Grant, Summer 2022.

    VTechWorks
    Presentation . 2023
    License: CC BY NC SA
    Data sources: VTechWorks
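
    A rough sketch of the word-embedding similarity step used to surface candidate descriptors for the controlled vocabulary. The seed terms, candidate terms, model name, and threshold are illustrative assumptions, not the Costume Core project's actual vocabulary or code.

        # Sketch: embed existing controlled-vocabulary terms and candidate terms,
        # then flag candidates that sit close to the existing vocabulary for
        # human review. Terms, model, and threshold are illustrative assumptions.
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")

        seed_terms = ["bodice", "petticoat", "brocade"]           # existing vocabulary (hypothetical)
        candidates = ["corset", "crinoline", "jacquard", "sofa"]  # terms mined from catalog text (hypothetical)

        seed_emb = model.encode(seed_terms, convert_to_tensor=True)
        cand_emb = model.encode(candidates, convert_to_tensor=True)
        sims = util.cos_sim(cand_emb, seed_emb)  # rows = candidates, columns = seed terms

        THRESHOLD = 0.5  # arbitrary cut-off for flagging terms for review
        for i, term in enumerate(candidates):
            best = sims[i].max().item()
            if best >= THRESHOLD:
                print(f"{term!r}: candidate descriptor (max similarity {best:.2f})")
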
  • Open Access
    Authors: Gamieldien, Yasir;

    This dissertation is about using artificial intelligence (AI) to help researchers and teachers understand how students learn from their exams. Exams are not only a way to measure what students know, but also a chance for students to reflect on how they studied and what they can do better next time. One way that students can reflect is by using exam wrappers, which are short questions that students answer after they get their graded exams back. A type of AI called natural language processing (NLP) is used in this dissertation, which can analyze text and find patterns and meanings in it. This study also uses a powerful AI tool called GPT-3.5, which can generate text and answer questions. The dissertation has three manuscripts that compare the traditional way of analyzing exam wrappers, which is done by hand, with the new way of using NLP and GPT-3.5; evaluate a specific promising NLP method; and use this method to gain a deeper understanding of students' self-regulated learning (SRL) while preparing for exams. The data comes from 3,800 exam wrappers from a physics course for engineering students. The first manuscript develops a way of using NLP and GPT-3.5 to find out what learning strategies and goals students talk about in their exam wrappers and compares it to more traditional methods of analysis. The second manuscript tests how accurate a specific NLP technique is in finding these strategies and goals. The third manuscript looks at how different students use different strategies and goals depending on how well they did on the exams, using the NLP technique from the second manuscript. I found that NLP and GPT-3.5 can help analyze exam wrappers faster and provide nuanced insights compared with manual approaches. The dissertation also shows which learning strategies and goals are most discussed by engineering students as they prepare for exams, and it gives suggestions, challenges, and ideas for future research on AI and learning from exams. This dissertation explores the use of natural language processing (NLP) and large language models (LLMs) to analyze student self-regulated learning (SRL) strategies in response to exam wrappers. Exam wrappers are structured reflection activities that prompt students to practice SRL after they get their graded exams back. The dissertation consists of three manuscripts that compare traditional qualitative analysis with NLP-assisted approaches using transformer-based models, including GPT-3.5, a state-of-the-art LLM. The data set comprises 3,800 student responses from an engineering physics course. The first manuscript develops two NLP-assisted codebooks for identifying learning strategies related to SRL in exam-wrapper responses and evaluates the agreement between them and traditional qualitative analysis. The second manuscript applies a novel NLP technique called zero-shot learning (ZSL) to classify student responses into the codes developed in the first manuscript and assesses the accuracy of this method on a subset of the full dataset. The third manuscript identifies the distribution and differences of learning strategies and SRL constructs among students with different exam-performance profiles, using the results from the second manuscript. The dissertation demonstrates the potential of NLP and LLMs to enhance qualitative research by providing scalable, robust, and efficient methods for analyzing large corpora of textual data. The dissertation also contributes to the understanding of SRL in engineering education by revealing the common learning strategies, impediments, and SRL constructs that students report using while preparing for exams in a first-year engineering physics course. The dissertation suggests implications, limitations, and directions for future research on NLP, LLMs, and SRL. Doctor of Philosophy
    (A sketch of zero-shot labeling of an exam-wrapper response appears at the end of this entry.)

    VTechWorks
    Doctoral thesis . 2023
    Data sources: VTechWorks
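
    A minimal sketch of zero-shot labeling of an exam-wrapper response using an off-the-shelf NLI-based zero-shot classifier from the transformers library; the candidate labels and the example response are invented, and the dissertation's actual codebooks and model choices (including GPT-3.5) may differ.

        # Sketch: zero-shot classification of an exam-wrapper response against a
        # small set of candidate codes. Labels and the response are invented.
        from transformers import pipeline

        classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

        response = ("Next time I will start the practice problems earlier "
                    "and review my mistakes from the homework.")
        candidate_labels = ["planning", "practice problems", "reviewing errors",
                            "time management", "seeking help"]

        result = classifier(response, candidate_labels, multi_label=True)
        for label, score in zip(result["labels"], result["scores"]):
            print(f"{label:>20s}: {score:.2f}")
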
  • Open Access
    Authors: Mohanty, Vikram;

    Identifying individuals in historical photographs is important for preserving material culture, correcting historical records, and adding economic value. Historians, antiques dealers, and collectors often rely on manual, time-consuming approaches. While Artificial Intelligence (AI) offers potential solutions, it is not widely adopted due to a lack of specialized tools and inherent inaccuracies and biases. In my dissertation, I address this gap by combining the complementary strengths of human intelligence and AI. I introduce Photo Sleuth, a novel person identification pipeline that combines crowdsourced expertise with facial recognition, supporting users in identifying unknown portraits from the American Civil War era (1861-65). Despite successfully identifying numerous unknown photos, users often face the 'last-mile problem': selecting the correct match(es) from a shortlist of high-confidence facial recognition candidates while avoiding false positives. To assist experts, I developed Second Opinion, an online tool that employs a novel crowdsourcing workflow, inspired by cognitive psychology, that effectively filters out up to 75% of facial recognition's false positives. Yet, as AI models continually evolve, changes in the underlying model can potentially impact user experience in such crowd-expert-AI workflows. I conducted an online study to understand user perceptions of changes in facial recognition models, especially in the context of historical person identification. Our findings showed that while human-AI collaborations were effective in identifying photos, they also introduced false positives. To reduce these misidentifications, I built Photo Steward, an information stewardship architecture that employs a deliberative workflow for validating historical photo identifications. Building on this foundation, I introduced DoubleCheck, a quality assessment framework that combines community stewardship and comprehensive provenance information to help users accurately assess photo identification quality. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that foster accurate decision-making in the context of historical photo identification. Identifying historical photos offers significant cultural and economic value; however, the identification process can be complex and challenging due to factors like poor source material and limited research resources. In my dissertation, I address this problem by leveraging the complementary strengths of human intelligence and Artificial Intelligence (AI). I built Photo Sleuth, an online platform that helps users identify unknown portraits from the American Civil War era. This platform employs a novel person identification workflow that combines crowdsourced human expertise and facial recognition. While AI-based facial recognition is effective at quickly scanning thousands of photos, it can sometimes present challenges. Specifically, it provides the human expert with a shortlist of highly similar-looking candidates from which the expert must discern the correct matches; I call this the 'last-mile problem' of person identification. To assist experts in navigating this challenge, I developed Second Opinion, a tool that employs a novel crowdsourcing workflow, inspired by cognitive psychology, named seed-gather-analyze. Further, I conducted an online study to understand the influence of changes in the underlying facial recognition models on downstream person identification tasks. While these tools enabled numerous successful identifications, they also occasionally led to misidentifications. To address this issue, I introduced Photo Steward, an information stewardship architecture that encourages deliberative decision-making while identifying photos. Building upon the principles of information stewardship and provenance, I then developed DoubleCheck, a quality assessment framework that presents pertinent information to aid users in accurately evaluating the quality of historical photo IDs. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that encourage accurate decision-making in the context of historical photo identification. Doctor of Philosophy

    VTechWorks
    Doctoral thesis . 2023
    License: CC BY NC ND
    Data sources: VTechWorks
  • Open Access
    Authors: Janardhan Reddy, Rathvik;

    The architecture of tragedy is a complex and emotive topic that explores the ways in which design elements can be used to commemorate and remember significant events. This thesis aims to examine the role of architecture in the representation of tragedy, with a specific focus on how design elements such as light, shadow, materiality, and spatial arrangement can evoke emotions and tell a story. The thesis will begin by examining the historical context of architecture and tragedy, looking at examples from ancient civilizations to contemporary times. It will then move on to explore the ways in which tragedy has been represented in architecture, examining key design elements and their impact on the viewer. Case studies will illustrate how architecture has been used to commemorate tragedies such as the Holocaust, 9/11, and the Fukushima disaster. The thesis will also explore the ethical implications of using architecture to represent tragedy, including questions about appropriateness, respect, and memory. It will examine the potential for architecture to create a sense of healing and closure for those affected by tragedy, as well as the potential for it to be misused or exploited for political or commercial gain. Ultimately, this thesis aims to comprehensively examine the relationship between architecture and tragedy, highlighting the importance of design elements in telling a story and commemorating significant events. It will explore the ways in which architecture can be used to create a sense of empathy and understanding while also acknowledging the complex ethical issues involved in representing tragedy through design. The relationship between architecture and tragedy has long been intertwined, serving as a means of expression, storytelling, and commemoration. The role of design elements such as light, shadow, materiality, and spatial arrangement in evoking emotions and telling a story has been significant in depicting tragedy in architecture. This thesis explores the ways in which architecture has been used to represent tragedy, examining key design elements and their impact on the viewer. Case studies, including the Holocaust Memorial in Berlin, the 9/11 Memorial in New York, and the Fukushima Memorial in Japan, illustrate how architecture has been used to commemorate and remember significant events. Master of Architecture

    VTechWorks
    Thesis . 2023
    Data sources: VTechWorks
  • Open Access
    Authors: Gunturi, Uma Sushmitha;

    Experiences of interpersonal racism persist as a prevalent reality for BIPOC (Black, Indigenous, People of Color) in the United States. One form of racism that often goes unnoticed is racial microaggressions: subtle acts of racism that leave victims questioning the intent of the aggressor. The line of offense is often unclear, as these acts are disguised through humor or seemingly harmless intentions. In this study, we analyze the language used in online racial microaggressions ("Acts") and compare it to personal narratives recounting experiences of such aggressions ("Recalls") by Black social media users. We curated a corpus of acts and recalls from social media discussions on platforms like Reddit and Tumblr. Additionally, we collaborated with Black participants in a workshop to hand-annotate and verify the corpus. Using natural language processing techniques and qualitative analysis, we examine the language underlying acts and recalls of racial microaggressions, with the goal of understanding the lexical patterns that differentiate the two in the context of racism in the U.S. Our findings indicate that neural language models can accurately classify acts and recalls, revealing contextual words that associate Black people with objects that perpetuate negative stereotypes. We also observe overlapping linguistic signatures between acts and recalls that serve different purposes, which has implications for current challenges in social media content moderation systems. Racial microaggressions are expressions of human biases that are subtly disguised. The differences in language and themes between instances of racial microaggressions ("Acts") and the discussions addressing them ("Recalls") in online communities have been difficult for researchers to quantify and extract automatically. In this study, we introduce a tool that can effectively distinguish acts and recalls of microaggressions. We utilize natural language processing techniques to classify them and identify key distinctions in language usage and themes. Additionally, we employ qualitative methods and engage in workshop discussions with Black participants to interpret the classification results. Our findings reveal common linguistic patterns between acts and recalls that serve opposing purposes: acts tend to stereotype and degrade Black people, while recalls portray the victims' discomfort and seek validation for their experiences. These findings highlight why recalls are often considered toxic in online communities. This work also represents an initial step toward creating a socio-technical system that safeguards the experiences of racial minority groups. Master of Science
    (A simplified sketch of an act-vs-recall classifier appears at the end of this entry.)

    VTechWorks
    Thesis . 2023
    License: CC BY
    Data sources: VTechWorks
  • Authors: Dodson, Terryl Dwayne;

    Historical photographs can generate significant cultural and economic value, but often their subjects go unidentified. However, if these photographs are analyzed correctly, clues in these photographs can open up new directions in identifying unknown subjects. For example, many 19th century photographs contain painted backdrops that can be mapped to a specific photographer or location, but this research process is often manual, time-consuming, and unsuccessful. Artificial Intelligence-based computer vision techniques could be used to automatically identify painted backdrops or photographers or group together photos with similar backdrops in order to aid researchers. However, it is unknown which computer vision techniques are feasible for painted backdrop identification or which techniques work better than others. We present three studies comparing four different types of computer vision techniques – Inception, CLIP, MAE, and pHash – across a variety of metrics. We find that a workflow that combines the CLIP computer vision technique, software that automatically classifies photo backgrounds, and simulated human feedback performs best. We also discuss implications for collaboration between humans and AI for analyzing images and new possibilities for academic research combining technology and history. Historical photographs can generate significant cultural and economic value, but often their subjects go unidentified. However, if analyzed correctly, visual clues in these photographs can open up new directions in identifying unknown subjects. For example, many 19th century photographs contain painted backdrops that can be mapped to a specific photographer or location, but this research process is often manual, time-consuming, and unsuccessful. AI-based computer vision algorithms could be used to automatically identify painted backdrops or photographers or cluster photos with similar backdrops in order to aid researchers. However, it is unknown which computer vision algorithms are feasible for painted backdrop identification or which techniques work better than others. We present three studies evaluating four different types of image embeddings – Inception, CLIP, MAE, and pHash – across a variety of metrics and techniques. We find that a workflow using CLIP embeddings combined with a background classifier and simulated user feedback performs best. We also discuss implications for human-AI collaboration in visual analysis and new possibilities for digital humanities scholarship. Master of Science
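
    To make the CLIP-based workflow concrete, here is a minimal sketch, not the thesis code: embed scanned photos with a pretrained CLIP model and rank other photos by cosine similarity, which is the core of grouping images with similar painted backdrops. The photo directory is hypothetical, and comparing whole images rather than classified backgrounds is a simplification.

    ```python
    # Sketch only: rank photos by CLIP image-embedding similarity to a query photo.
    from pathlib import Path
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    paths = sorted(Path("photos").glob("*.jpg"))            # hypothetical directory of scans
    images = [Image.open(p).convert("RGB") for p in paths]

    with torch.no_grad():
        inputs = processor(images=images, return_tensors="pt")
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)         # unit-normalize for cosine similarity

    query = 0                                                # index of the photo being researched
    scores = feats @ feats[query]
    ranked = scores.argsort(descending=True)
    for idx in ranked[1:6]:                                  # five most similar photos (skipping the query)
        i = int(idx)
        print(paths[i], float(scores[i]))
    ```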

    VTechWorks
    Thesis . 2023
    Data sources: VTechWorks
  • Authors: Shakir, Umair;

    My dissertation is about how engineering educators can use natural language processing (NLP) in implementing open-ended assessments in undergraduate engineering degree programs. Engineering students need to develop an ability to exercise judgment about better and worse outcomes of their decisions. One important consideration for improving engineering students' judgment involves creating sound educational assessments. Currently, engineering educators face a trade-off in selecting between open- and closed-ended assessments. Closed-ended assessments are easy to administer and score but are limited in what they measure, given that students are required, in many instances, to choose from an a priori list. Conversely, open-ended assessments allow students to write their answers in any way they choose in their own words. However, open-ended assessments are likely to take more person-hours and lack consistency for both inter-grader and intra-grader grading. The solution to this challenge is the use of NLP. The working principles of existing NLP models are the tallying of words, keyword matching, or syntactic similarity of words, which have often proved too brittle to capture the diversity of language that students could write. Therefore, the problem that motivated the present study is how to assess student responses based on underlying concepts and meanings instead of morphological characteristics or grammatical structure in sentences. Some of this problem can be addressed by developing NLP-assisted grading tools based on transformer-based large language models (TLLMs) such as BERT, MPNet, and GPT-4. This is because TLLMs are trained on billions of words and have billions of parameters, thereby providing the capacity to capture richer semantic representations of input text. Given that TLLMs have become available only in the last five years, there is a significant lack of research related to integrating TLLMs into the assessment of open-ended engineering case studies. My dissertation study aims to fill this research gap. I developed and evaluated four NLP approaches based on TLLMs for thematic analysis of student responses to eight question prompts of engineering ethics and systems thinking case scenarios. The study's research design comprised the following steps. First, I developed an example bank for each question prompt with two procedures: (a) human-in-the-loop natural language processing (HILNLP) and (b) traditional qualitative coding. Second, I assigned labels from the example banks to unlabeled student responses with two NLP techniques: (i) k-Nearest Neighbors (kNN) and (ii) Zero-Shot Classification (ZSC). Further, I utilized the following configurations of these NLP techniques: (i) kNN (when k=1), (ii) kNN (when k=3), (iii) ZSC (multi-label=false), and (iv) ZSC (multi-label=true). The kNN approach took as input both the sentences and their labels from the example banks; the ZSC approach took as input only the labels from the example bank. Third, I read each sentence or phrase along with the model's suggested label(s) to evaluate whether the assigned label represented the idea described in the sentence, and assigned the following numerical ratings: accurate (1), neutral (0), and inaccurate (-1). Lastly, I used those numerical evaluation ratings to calculate the accuracy of the NLP approaches. The results of my study showed moderate accuracy in thematically analyzing students' open-ended responses to two different engineering case scenarios.
This is because no single method among the four NLP methods performed consistently better than the others across all question prompts. The highest accuracy rate varied between 53% and 92%, depending upon the question prompt and NLP method. Despite these mixed results, this study accomplishes multiple goals. My dissertation demonstrates to community members that TLLMs have potential for positive impacts on improving classroom practices in engineering education. In doing so, my dissertation study takes up one aspect of instructional design: assessment of students' learning outcomes in engineering ethics and systems thinking skills. Further, my study derived important implications for practice in engineering education. First, I offered lessons and guidelines for educators interested in incorporating NLP into their educational assessment. Second, the open-source code is uploaded to a GitHub repository, thereby making it more accessible to a larger group of users. Third, I offered suggestions for qualitative researchers on conducting NLP-assisted qualitative analysis of textual data. Overall, my study introduced state-of-the-art TLLM-based NLP approaches to a research field where they hold potential yet remain underutilized. This study can encourage engineering education researchers to utilize these NLP methods, which may be helpful in analyzing the vast textual data generated in engineering education, thereby reducing the number of missed opportunities to glean information for actors and agents in engineering education. My dissertation is about how engineering educators can use natural language processing (NLP) in implementing open-ended assessments in undergraduate engineering degree programs. Engineering students need to develop an ability to exercise judgment about better and worse outcomes of their decisions. One important consideration for improving engineering students' judgment involves creating sound educational assessments. Currently, engineering educators face a trade-off in selecting between open- and closed-ended assessments. Closed-ended assessments are easy to administer and score but are limited in what they measure, given that students are required, in many instances, to choose from an a priori list. Conversely, open-ended assessments allow students to write their answers in any way they choose in their own words. However, open-ended assessments are likely to take more person-hours and lack consistency for both inter-grader and intra-grader grading. The solution to this challenge is the use of NLP. The working principles of existing NLP models are the tallying of words, keyword matching, or syntactic similarity of words, which have often proved too brittle to capture the diversity of language that students could write. Therefore, the problem that motivated the present study is how to assess student responses based on underlying concepts and meanings instead of morphological characteristics or grammatical structure in sentences. Some of this problem can be addressed by developing NLP-assisted grading tools based on transformer-based large language models (TLLMs). This is because TLLMs are trained on billions of words and have billions of parameters, thereby providing the capacity to capture richer semantic representations of input text. Given that TLLMs have become available only in the last five years, there is a significant lack of research related to integrating TLLMs into the assessment of open-ended engineering case studies. My dissertation study aims to fill this research gap.
The results of my study showed moderate accuracy in thematically analyzing students' open-ended responses to two different engineering case scenarios. My dissertation demonstrates to community members that TLLMs have potential for positive impacts on improving classroom practices in engineering education. This study can encourage engineering education researchers to utilize these NLP methods that may be helpful in analyzing the vast textual data generated in engineering education, thereby reducing the number of missed opportunities to glean information for actors and agents in engineering education. Doctor of Philosophy
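
    The kNN configuration described above can be sketched as follows; this is an illustration under stated assumptions, not the dissertation's code. An MPNet-style sentence encoder embeds both the example-bank sentences and a new response, and the response receives the majority label of its k most similar examples. The example-bank entries and label names are invented placeholders; the ZSC configurations would instead pass only the label names to a zero-shot classifier.

    ```python
    # Sketch only: kNN labelling of a student response against a small example bank.
    from collections import Counter
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-mpnet-base-v2")   # MPNet-based sentence encoder, as named in the abstract

    example_bank = [                                     # hypothetical (sentence, label) pairs
        ("The engineer should weigh public safety against cost.", "ethical_tradeoff"),
        ("Cutting corners to save money puts users at risk.", "ethical_tradeoff"),
        ("Many stakeholders and subsystems interact in this design.", "systems_thinking"),
        ("A change in one component ripples through the whole system.", "systems_thinking"),
        ("Report the defect to the supervising engineer immediately.", "professional_duty"),
    ]
    bank_texts = [s for s, _ in example_bank]
    bank_labels = [l for _, l in example_bank]
    bank_emb = encoder.encode(bank_texts, convert_to_tensor=True)

    def knn_label(response: str, k: int = 3) -> str:
        """Return the majority label among the k most similar example-bank sentences."""
        emb = encoder.encode(response, convert_to_tensor=True)
        scores = util.cos_sim(emb, bank_emb)[0]
        top = scores.topk(k).indices.tolist()
        return Counter(bank_labels[i] for i in top).most_common(1)[0][0]

    print(knn_label("We must consider how our design choice affects the whole community.", k=3))
    ```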

    VTechWorks
    Doctoral thesis . 2023
    License: CC BY NC
    Data sources: VTechWorks
  • Authors: Allen, Amy E.; Kavanagh, Anne Marie; ni Cassaithe, Caitriona;

    Accepted version

    VTechWorks
    Part of book or chapter of book . 2023
    Data sources: VTechWorks
  • Authors: Vikram Mohanty; Kurt Luther;

    Historical photos are valuable for their cultural and economic significance, but can be difficult to identify accurately due to various challenges such as low-quality images, lack of corroborating evidence, and limited research resources. Misidentified photos can have significant negative consequences, including lost economic value, incorrect historical records, and the spread of misinformation that can lead to perpetuating conspiracy theories. To accurately assess the credibility of a photo identification (ID), it may be necessary to conduct investigative research, use domain knowledge, and consult experts. In this paper, we introduce DoubleCheck, a quality assessment framework for verifying historical photo IDs on Civil War Photo Sleuth (CWPS), a popular online platform for identifying American Civil War-era photos using facial recognition and crowdsourcing. DoubleCheck focuses on improving CWPS's user experience and system architecture to display information useful for assessing the quality of historical photo IDs on CWPS. In a mixed-methods evaluation of DoubleCheck, we found that users contributed a wide diversity of sources for photo IDs, which helped facilitate the community's assessment of these IDs through DoubleCheck's provenance visualizations. Further, DoubleCheck's quality assessment badges and visualizations supported users in making accurate assessments of photo IDs, even in cases involving ID conflicts. Comment: Accepted to ACM Journal on Computing and Cultural Heritage (JOCCH)

    VTechWorks
    Other literature type . 2023
    Data sources: VTechWorks
    arXiv.org e-Print Archive
    Other literature type . Preprint . 2023
    https://doi.org/10.48550/arxiv...
    Article . 2023
    License: CC BY NC ND
    Data sources: Datacite
    Journal on Computing and Cultural Heritage
    Article . 2023 . Peer-reviewed
    Data sources: Crossref
    Access Routes: Green, Bronze
  • Authors: Lei, Shuo;

    Recent advances in large neural network-style models have demonstrated great performance in various applications, such as image generation, question answering, and audio classification. However, these deep and high-capacity models require a large amount of labeled data to function properly, rendering them inapplicable in many real-world scenarios. This dissertation focuses on the development and evaluation of advanced machine learning algorithms to solve the following research questions: (1) How to learn novel classes with limited labeled data, (2) How to adapt a large pre-trained model to the target domain if only unlabeled data is available, (3) How to boost the performance of the few-shot learning model with unlabeled data, and (4) How to utilize limited labeled data to learn new classes without training data in the same domain. First, we study few-shot learning in text classification tasks. Meta-learning is becoming a popular approach for addressing few-shot text classification and has achieved state-of-the-art performance. However, the performance of existing approaches heavily depends on the interclass variance of the support set. To address this problem, we propose a TART network for few-shot text classification. The model enhances generalization by transforming the class prototypes to per-class fixed reference points in task-adaptive metric spaces. In addition, we design a novel discriminative reference regularization to maximize divergence between transformed prototypes in task-adaptive metric spaces to improve performance further. In the second problem, we focus on self-learning in the cross-lingual transfer task. Our goal here is to develop a framework that can make the pretrained cross-lingual model continue learning from a large amount of unlabeled data. Existing self-learning methods in cross-lingual transfer tasks suffer from the large number of incorrectly pseudo-labeled samples used in the training phase. We first design an uncertainty-aware cross-lingual transfer framework with pseudo-partial-labels. We also propose a novel pseudo-partial-label estimation method that considers prediction confidences and a limit on the number of candidate classes. Next, to boost the performance of the few-shot learning model with unlabeled data, we propose a semi-supervised approach for the few-shot semantic segmentation task. Existing solutions for few-shot semantic segmentation cannot easily be applied to utilize image-level weak annotations. We propose a class-prototype augmentation method to enrich the prototype representation by utilizing a few image-level annotations, achieving superior performance in one-/multi-way and weak annotation settings. We also design a robust strategy with soft-masked average pooling to handle the noise in image-level annotations, which considers the prediction uncertainty and employs a task-specific threshold to mask out distractions. Finally, we study cross-domain few-shot learning in the semantic segmentation task. Most existing few-shot segmentation methods consider a setting where base classes are drawn from the same domain as the new classes. Nevertheless, gathering enough training data for meta-learning is either unattainable or impractical in many applications. We extend few-shot semantic segmentation to a new task, called Cross-Domain Few-Shot Semantic Segmentation (CD-FSS), which aims to generalize the meta-knowledge from domains with sufficient training labels to low-resource domains.
Then, we establish a new benchmark for the CD-FSS task and evaluate both representative few-shot segmentation methods and transfer learning based methods on the proposed benchmark. We then propose a novel Pyramid-Anchor-Transformation-based few-shot segmentation network (PATNet), in which domain-specific features are transformed into domain-agnostic ones so that downstream segmentation modules can quickly adapt to unseen domains. Nowadays, deep learning techniques play a crucial role in our everyday existence. In addition, they are crucial to the success of many e-commerce and local businesses for enhancing data analytics and decision-making. Notable applications include intelligent transportation, intelligent healthcare, the generation of natural language, and intrusion detection, among others. To achieve reasonable performance on a new task, these deep and high-capacity models require thousands of labeled examples, which increases the data collection effort and computation costs associated with training a model. Moreover, in many disciplines, it might be difficult or even impossible to obtain data due to concerns such as privacy and safety. This dissertation focuses on learning with limited labeled data in natural language processing and computer vision tasks. To recognize novel classes with a few examples in text classification tasks, we develop a deep learning-based model that can capture both cross-task transferable knowledge and task-specific features. We also build an uncertainty-aware self-learning framework and a semi-supervised few-shot learning method, which allow us to boost the pre-trained model with easily accessible unlabeled data. In addition, we propose a cross-domain few-shot semantic segmentation method to generalize the model to different domains with a few examples. By handling these unique challenges in learning with limited labeled data and developing suitable approaches, we hope to improve the efficiency and generalization of deep learning methods in the real world. Doctor of Philosophy
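
    A common thread in these methods is prototype-based few-shot classification in a metric space. The following is a minimal sketch of that underlying idea, not TART or PATNet themselves, using random tensors in place of encoder outputs.

    ```python
    # Sketch only: average the support embeddings of each class into a prototype,
    # then label a query embedding by its nearest prototype.
    import torch
    import torch.nn.functional as F

    emb_dim, n_classes, k_shot = 64, 3, 5
    # Toy support embeddings (n_classes, k_shot, emb_dim); in practice these would come
    # from a pretrained encoder applied to the few labelled examples of each novel class.
    support = torch.randn(n_classes, k_shot, emb_dim)
    prototypes = support.mean(dim=1)                      # one prototype per class

    query = torch.randn(emb_dim)                          # embedding of an unlabelled example
    dists = torch.cdist(query.unsqueeze(0), prototypes)   # (1, n_classes) Euclidean distances
    probs = F.softmax(-dists, dim=-1)                     # closer prototype -> higher probability
    print("predicted class:", int(probs.argmax()))
    ```

    Methods like those above then refine this basic recipe, for example by transforming prototypes into task-adaptive or domain-agnostic spaces before the distance comparison.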

    VTechWorks
    Doctoral thesis . 2023
    Data sources: VTechWorks
  • Authors: Ng, Wen Nie;

    The goal of the paper is to enhance the metadata standard of fashion collections by expanding the controlled vocabulary and metadata elements for Costume Core, a metadata schema designed specifically for fashion artifacts. Various techniques are employed to achieve this goal, including identifying new descriptors using word embedding similarity measurements and adding new descriptive terms for precise artifact descriptions to use when re-cataloging a university fashion collection in Costume Core. The paper also provides a sneak peek of the Model Output Confirmative Helper Application, which simplifies the vocabulary review process. Additionally, a survey was conducted to collect insights into how other fashion professionals use metadata when describing dress artifacts. The survey results reveal 1) commonly used metadata standards in the historic fashion domain; 2) sample metadata respondents use; and 3) partial potential metadata that can be appended to Costume Core, which is relevant to Virginia Tech’s Oris Glisson Historic Costume and Textile Collection. The expanded Costume Core resulting from the project offers a more comprehensive way of describing fashion collection holdings/artifacts. It has the potential to be adopted by fashion collections to produce metadata that is findable, accessible, interoperable, and reusable. 1. Abstract was peer-reviewed. 2. Slides and presentation were prepared and presented by Wen Nie Ng based on a previously published article, incorporating updates on ongoing projects that have stemmed from the grant: Smith, D., Ng, W. N., McIrvin, C., Miller, C., Spencer, J. “Comparative Study and Expansion of Metadata Standards for Historic Fashion Collections.” Visual Resources Association Bulletin 50, no. 1 (June 2023). https://online.vraweb.org/index.php/vrab/article/view/ 3. Educational component or the relevance to attendees: The information shared in this presentation may benefit a diverse range of individuals due to the widespread impact of fashion on culture and society, both historically and in the 21st century. Additionally, the general public composes the majority of online users of digital fashion collections. Therefore, extending the metadata schema to capture vocabulary likely used by online users will result in satisfactory searches on the user end. 4. Slides were created on Google Slides and exported as a pptx file. Virginia Tech University Libraries Collaborative Research Grant, Summer 2022
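
    The word-embedding similarity step can be illustrated with a short sketch; this is not the project's Model Output Confirmative Helper Application, and the seed descriptors and the choice of pretrained GloVe vectors are assumptions. Candidate terms produced this way would still go to human reviewers for confirmation.

    ```python
    # Sketch only: suggest candidate related descriptors for existing vocabulary terms
    # via word-embedding similarity, using small pretrained GloVe vectors from gensim.
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-100")     # pretrained word-embedding model

    seed_terms = ["bodice", "sleeve", "hem"]           # hypothetical existing descriptors
    for term in seed_terms:
        if term in vectors:
            candidates = vectors.most_similar(term, topn=5)
            print(term, "->", [word for word, _ in candidates])
    ```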

    VTechWorks
    Presentation . 2023
    License: CC BY NC SA
    Data sources: VTechWorks
  • Authors: Gamieldien, Yasir;

    This dissertation is about using artificial intelligence (AI) to help researchers and teachers understand how students learn from their exams. Exams are not only a way to measure what students know, but also a chance for students to reflect on how they studied and what they can do better next time. One way that students can reflect is by using exam wrappers, which are short questions that students answer after they get their graded exams back. This dissertation uses a type of AI called natural language processing (NLP), which can analyze text and find patterns and meanings in it. This study also uses a powerful AI tool called GPT-3.5, which can generate text and answer questions. The dissertation has three manuscripts that compare the traditional way of analyzing exam wrappers, which is done by hand, with the new way of using NLP and GPT-3.5, evaluate a specific promising NLP method, and use this method to try to gain a deeper understanding of students' self-regulated learning (SRL) while preparing for exams. The data comes from 3,800 exam wrappers from a physics course for engineering students. The first manuscript develops a way of using NLP and GPT-3.5 to find out what learning strategies and goals students talk about in their exam wrappers and compares it to more traditional methods of analysis. The second manuscript tests how accurate a specific NLP technique is in finding these strategies and goals. The third manuscript uses the NLP technique from the second manuscript to look at how different students use different strategies and goals depending on how well they did on the exams. I found that NLP and GPT-3.5 can aid in analyzing exam wrappers faster and provide nuanced insights when compared with manual approaches. The dissertation also shows what learning strategies and goals are most discussed for engineering students as they prepare for exams. The dissertation gives some suggestions, challenges, and ideas for future research on AI and learning from exams. This dissertation explores the use of natural language processing (NLP) and large language models (LLMs) to analyze student self-regulated learning (SRL) strategies in response to exam wrappers. Exam wrappers are structured reflection activities that prompt students to practice SRL after they get their graded exams back. The dissertation consists of three manuscripts that compare traditional qualitative analysis with NLP-assisted approaches using transformer-based models including GPT-3.5, a state-of-the-art LLM. The data set comprises 3,800 student responses from an engineering physics course. The first manuscript develops two NLP-assisted codebooks for identifying learning strategies related to SRL in exam wrapper responses and evaluates the agreement between them and traditional qualitative analysis. The second manuscript applies a novel NLP technique called zero-shot learning (ZSL) to classify student responses into the codes developed in the first manuscript and assesses the accuracy of this method by evaluating a subset of the full dataset. The third manuscript identifies the distribution and differences of learning strategies and SRL constructs among students of different exam performance profiles using the results from the second manuscript. The dissertation demonstrates the potential of NLP and LLMs to enhance qualitative research by providing scalable, robust, and efficient methods for analyzing large corpora of textual data.
The dissertation also contributes to the understanding of SRL in engineering education by revealing the common learning strategies, impediments, and SRL constructs that students report they use while preparing for exams in a first-year engineering physics course. The dissertation suggests implications, limitations, and directions for future research on NLP, LLMs, and SRL. Doctor of Philosophy
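
    The zero-shot labelling step might look like the following sketch, which uses an off-the-shelf NLI-based zero-shot classifier rather than the dissertation's exact models; the example response and codebook labels are illustrative, and the multi_label argument assumes a recent version of the transformers library.

    ```python
    # Sketch only: score an exam-wrapper response against a codebook of learning-strategy
    # labels with a zero-shot classifier. Labels below are invented, not the real codebook.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    response = ("Next time I will start the practice problems earlier and quiz myself "
                "instead of just rereading my notes.")
    codebook = ["practice problems", "self-testing", "rereading notes",
                "time management", "seeking help"]

    result = classifier(response, candidate_labels=codebook, multi_label=True)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")
    ```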

    VTechWorks
    Doctoral thesis . 2023
    Data sources: VTechWorks
  • Authors: Mohanty, Vikram;

    Identifying individuals in historical photographs is important for preserving material culture, correcting historical records, and adding economic value. Historians, antiques dealers, and collectors often rely on manual, time-consuming approaches. While Artificial Intelligence (AI) offers potential solutions, it is not widely adopted due to a lack of specialized tools and inherent inaccuracies and biases. In my dissertation, I address this gap by combining the complementary strengths of human intelligence and AI. I introduce Photo Sleuth, a novel person identification pipeline that combines crowdsourced expertise with facial recognition, supporting users in identifying unknown portraits from the American Civil War era (1861-65). Despite successfully identifying numerous unknown photos, users often face the 'last-mile problem': selecting the correct match(es) from a shortlist of high-confidence facial recognition candidates while avoiding false positives. To assist experts, I developed Second Opinion, an online tool that employs a novel crowdsourcing workflow, inspired by cognitive psychology, effectively filtering out up to 75% of facial recognition's false positives. Yet, as AI models continually evolve, changes in the underlying model can potentially impact user experience in such crowd-expert-AI workflows. I conducted an online study to understand user perceptions of changes in facial recognition models, especially in the context of historical person identification. Our findings showed that while human-AI collaborations were effective in identifying photos, they also introduced false positives. To reduce these misidentifications, I built Photo Steward, an information stewardship architecture that employs a deliberative workflow for validating historical photo identifications. Building on this foundation, I introduced DoubleCheck, a quality assessment framework that combines community stewardship and comprehensive provenance information to help users accurately assess photo identification quality. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that foster accurate decision-making in the context of historical photo identification. Identifying historical photos offers significant cultural and economic value; however, the identification process can be complex and challenging due to factors like poor source material and limited research resources. In my dissertation, I address this problem by leveraging the complementary strengths of human intelligence and Artificial Intelligence (AI). I built Photo Sleuth, an online platform that helps users identify unknown portraits from the American Civil War era. This platform employs a novel person identification workflow that combines crowdsourced human expertise and facial recognition. While AI-based facial recognition is effective at quickly scanning thousands of photos, it can sometimes present challenges. Specifically, it provides the human expert with a shortlist of highly similar-looking candidates from which the expert must discern the correct matches; I call this the 'last-mile problem' of person identification. To assist experts in navigating this challenge, I developed Second Opinion, a tool that employs a novel crowdsourcing workflow inspired by cognitive psychology, named seed-gather-analyze.
Further, I conducted an online study to understand the influence of changes in the underlying facial recognition models on the downstream person identification tasks. While these tools enabled numerous successful identifications, they also occasionally led to misidentifications. To address this issue, I introduced Photo Steward, an information stewardship architecture that encourages deliberative decision-making while identifying photos. Building upon the principles of information stewardship and provenance, I then developed DoubleCheck, a quality assessment framework that presents pertinent information, aiding users in accurately evaluating the quality of historical photo IDs. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that encourage accurate decision-making in the context of historical photo identification. Doctor of Philosophy
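
    A minimal sketch of the facial-recognition shortlisting behind the 'last-mile problem', not Photo Sleuth itself: compute a face encoding for an unknown portrait and for a set of reference portraits, then keep only the closest candidates for expert review. The image files here are hypothetical, and the open-source face_recognition library stands in for whatever recognizer the platform actually uses.

    ```python
    # Sketch only: shortlist reference portraits most similar to an unknown portrait.
    import face_recognition

    unknown = face_recognition.load_image_file("unknown_portrait.jpg")       # hypothetical scan
    unknown_enc = face_recognition.face_encodings(unknown)[0]                # assumes one face is found

    reference_files = ["soldier_a.jpg", "soldier_b.jpg", "soldier_c.jpg"]    # hypothetical references
    reference_encs = [face_recognition.face_encodings(
                          face_recognition.load_image_file(f))[0] for f in reference_files]

    distances = face_recognition.face_distance(reference_encs, unknown_enc)  # lower = more similar
    shortlist = sorted(zip(reference_files, distances), key=lambda x: x[1])[:2]
    print("candidates for expert review:", shortlist)
    ```

    In the workflows described above, this shortlist is only the starting point; crowdsourced and expert judgment, provenance, and deliberation then decide whether any candidate is accepted as an identification.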

    VTechWorks
    Doctoral thesis . 2023
    License: CC BY NC ND
    Data sources: VTechWorks
  • Authors: Janardhan Reddy, Rathvik;

    The architecture of tragedy is a complex and emotive topic that explores the ways in which design elements can be used to commemorate and remember significant events. This thesis aims to examine the role of architecture in the representation of tragedy, with a specific focus on how design elements such as light, shadow, materiality, and spatial arrangement can evoke emotions and tell a story. The thesis will begin by examining the historical context of architecture and tragedy, looking at examples from ancient civilizations to contemporary times. It will then move on to explore the ways in which tragedy has been represented in architecture, examining key design elements and their impact on the viewer. Case studies will illustrate how architecture has been used to commemorate tragedies such as the Holocaust, 9/11, and the Fukushima disaster. The thesis will also explore the ethical implications of using architecture to represent tragedy, including questions about appropriateness, respect, and memory. It will examine the potential for architecture to create a sense of healing and closure for those affected by tragedy, as well as the potential for it to be misused or exploited for political or commercial gain. Ultimately, this thesis aims to comprehensively examine the relationship between architecture and tragedy, highlighting the importance of design elements in telling a story and commemorating significant events. It will explore the ways in which architecture can be used to create a sense of empathy and understanding while also acknowledging the complex ethical issues involved in representing tragedy through design.
    Architecture and tragedy have long been intertwined, with architecture serving as a means of expression, storytelling, and commemoration. Design elements such as light, shadow, materiality, and spatial arrangement play a significant role in evoking emotions and telling a story when depicting tragedy in architecture. This thesis explores the ways in which architecture has been used to represent tragedy, examining key design elements and their impact on the viewer. Case studies, including the Holocaust Memorial in Berlin, the 9/11 Memorial in New York, and the Fukushima Memorial in Japan, illustrate how architecture has been used to commemorate and remember significant events. Master of Architecture

    VTechWorks
    Thesis . 2023
    Data sources: VTechWorks
  • Authors: Gunturi, Uma Sushmitha;

    Experiences of interpersonal racism persist as a prevalent reality for BIPOC (Black, Indigenous, People of Color) in the United States. One form of racism that often goes unnoticed is racial microaggressions. These are subtle acts of racism that leave victims questioning the intent of the aggressor. The line of offense is often unclear, as these acts are disguised through humor or seemingly harmless intentions. In this study, we analyze the language used in online racial microaggressions ("Acts") and compare it to personal narratives recounting experiences of such aggressions ("Recalls") by Black social media users. We curated a corpus of acts and recalls from social media discussions on platforms like Reddit and Tumblr. Additionally, we collaborated with Black participants in a workshop to hand-annotate and verify the corpus. Using natural language processing techniques and qualitative analysis, we examine the language underlying acts and recalls of racial microaggressions. Our goal is to understand the lexical patterns that differentiate the two in the context of racism in the U.S. Our findings indicate that neural language models can accurately classify acts and recalls, revealing contextual words that associate Black people with objects that perpetuate negative stereotypes. We also observe overlapping linguistic signatures between acts and recalls that serve different purposes, which has implications for current challenges in social media content moderation systems.
    Racial microaggressions are expressions of human biases that are subtly disguised. Differences in the language and themes of racial microaggressions ("Acts") and the discussions addressing them ("Recalls") in online communities have made it difficult for researchers to automatically distinguish and quantify the two. In this study, we introduce a tool that can effectively distinguish acts and recalls of microaggressions. We utilize natural language processing techniques to classify them and identify key distinctions in language usage and themes. Additionally, we employ qualitative methods and engage in workshop discussions with Black participants to interpret the classification results. Our findings reveal common linguistic patterns between acts and recalls that serve opposing purposes: acts tend to stereotype and degrade Black people, while recalls portray the victims' discomfort and seek validation for their experiences. These findings help explain why recalls are often flagged as toxic in online communities. This work also represents an initial step towards creating a socio-technical system that safeguards the experiences of racial minority groups. Master of Science
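    As a rough sketch of the kind of act/recall classifier the study describes (not the study's actual model, corpus, or labels), the snippet below encodes posts with a general-purpose sentence encoder and trains a simple classifier to separate the two categories; the example texts are placeholders, not data from the study.

```python
# Minimal sketch, assuming a small hand-labeled corpus of "acts" and "recalls".
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder texts standing in for hand-annotated social media posts.
texts = [
    "example post labeled as an act of microaggression",
    "example first-person narrative recalling such an experience",
]
labels = ["act", "recall"]

# General-purpose sentence encoder (not necessarily the model used in the study).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(texts)

# Simple linear classifier over the sentence embeddings.
clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)

new_post = "another placeholder post to categorize"
prediction = clf.predict(encoder.encode([new_post]))[0]
print(prediction)  # "act" or "recall"
```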

    VTechWorks
    Thesis . 2023
    License: CC BY
    Data sources: VTechWorks
  • Authors: Dodson, Terryl Dwayne;

    Historical photographs can generate significant cultural and economic value, but often their subjects go unidentified. However, if these photographs are analyzed correctly, clues in these photographs can open up new directions in identifying unknown subjects. For example, many 19th century photographs contain painted backdrops that can be mapped to a specific photographer or location, but this research process is often manual, time-consuming, and unsuccessful. Artificial Intelligence-based computer vision techniques could be used to automatically identify painted backdrops or photographers or group together photos with similar backdrops in order to aid researchers. However, it is unknown which computer vision techniques are feasible for painted backdrop identification or which techniques work better than others. We present three studies comparing four different types of computer vision techniques – Inception, CLIP, MAE, and pHash – across a variety of metrics. We find that a workflow that combines the CLIP computer vision technique, software that automatically classifies photo backgrounds, and simulated human feedback performs best. We also discuss implications for collaboration between humans and AI for analyzing images and new possibilities for academic research combining technology and history.
    Historical photographs can generate significant cultural and economic value, but often their subjects go unidentified. However, if analyzed correctly, visual clues in these photographs can open up new directions in identifying unknown subjects. For example, many 19th century photographs contain painted backdrops that can be mapped to a specific photographer or location, but this research process is often manual, time-consuming, and unsuccessful. AI-based computer vision algorithms could be used to automatically identify painted backdrops or photographers or cluster photos with similar backdrops in order to aid researchers. However, it is unknown which computer vision algorithms are feasible for painted backdrop identification or which techniques work better than others. We present three studies evaluating four different types of image embeddings – Inception, CLIP, MAE, and pHash – across a variety of metrics and techniques. We find that a workflow using CLIP embeddings combined with a background classifier and simulated user feedback performs best. We also discuss implications for human-AI collaboration in visual analysis and new possibilities for digital humanities scholarship. Master of Science
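    To illustrate the CLIP-embedding comparison at the core of this kind of workflow, here is a minimal sketch: it embeds backdrop images with an off-the-shelf CLIP model and flags pairs whose cosine similarity exceeds an arbitrary threshold. The file names and threshold are hypothetical, and the thesis's full pipeline (background classifier, simulated user feedback) is omitted.

```python
# Hedged sketch of backdrop comparison with CLIP image embeddings.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf CLIP image encoder via sentence-transformers.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical backdrop crops; not files from the thesis dataset.
paths = ["backdrop_001.jpg", "backdrop_002.jpg", "backdrop_003.jpg"]
images = [Image.open(p) for p in paths]
embeddings = model.encode(images, convert_to_tensor=True)

# Pairwise cosine similarities; higher scores suggest visually similar backdrops.
similarities = util.cos_sim(embeddings, embeddings)
for i in range(len(paths)):
    for j in range(i + 1, len(paths)):
        score = float(similarities[i][j])
        if score > 0.9:  # illustrative threshold, not tuned
            print(f"{paths[i]} and {paths[j]} may share a backdrop (score {score:.2f})")
```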

    VTechWorks
    Thesis . 2023
    Data sources: VTechWorks