Maryam Taeb


View My GitHub Profile

AI/ML Researcher
HCI, Cybersecurity
FAMU-FSU College of Engineering

Download Resume

Technical Skills: TensorFlow, Scikit-learn, Python, SQL, Shell scripting, JavaScript, MATLAB

Education

Work Experience

Research Assistant @ SPADAL Lab, FAMU-FSU College of Engineering (August 2021 - Present)

AI/ML Researcher Intern @ Apple (May 2023 - September 2023)

Adjunct Faculty @ School of Architecture & Engineering Technology, FAMU (August 2023 - December 2023)

Projects

A Blockchain-Based Decentralized Federated Learning Framework for Deepfake Detection in Digital Forensic Scenarios

Publication | News Coverage | Open-Source Framework

The Blockchain-Based Decentralized Federated Learning Framework for Digital Forensic Applications & Deepfake Detection (BFDD) is a solution designed to support law enforcement agencies in authenticating digital evidence. The framework enhances targeted data extraction and improves deepfake detection using machine learning. By bolstering evidence-authentication processes, BFDD aims to reinforce the reliability of digital media in judicial settings, addressing critical issues in digital forensics and the integrity of evidence.
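
Conceptually, each participating node (for example, a law-enforcement lab) trains a local deepfake detector on its own evidence and shares only model parameters, which are aggregated into a global model. The sketch below shows a plain federated-averaging (FedAvg) step under that assumption; the weighting scheme and names are illustrative, not BFDD's exact protocol.

```python
# Minimal federated-averaging sketch (FedAvg-style), assuming each node shares
# its layer weights and local sample count. Illustrative only.
import numpy as np

def federated_average(local_weights, sample_counts):
    """Aggregate per-node parameters, weighting each node by its sample count."""
    total = sum(sample_counts)
    layers = zip(*local_weights)  # group the same layer across all nodes
    return [
        sum(w * (n / total) for w, n in zip(layer, sample_counts))
        for layer in layers
    ]

# Three nodes, each with two toy "layers" of parameters.
node_weights = [
    [np.ones((2, 2)) * 0.1, np.ones(2) * 0.1],
    [np.ones((2, 2)) * 0.3, np.ones(2) * 0.3],
    [np.ones((2, 2)) * 0.5, np.ones(2) * 0.5],
]
global_model = federated_average(node_weights, sample_counts=[100, 200, 700])
print(global_model[0])  # aggregated first-layer weights
```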


AXNav: Replaying Accessibility Tests from Natural Language

Publication

In this project, we addressed the challenges of manual accessibility testing by introducing a novel system that uses Large Language Models (LLMs) and pixel-based UI understanding models to automate test execution. The system takes manual test instructions, such as “Search for a show in VoiceOver,” executes them, and generates chaptered, navigable videos of the run. These videos include heuristic-based flags for potential accessibility issues, such as improper text resizing or navigation loops in VoiceOver.
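
As an illustration of the kind of heuristic flag mentioned above, the sketch below checks whether text elements actually grew after the text size was increased. The element-to-height mapping is an assumed toy representation, not AXNav's actual UI model.

```python
# Illustrative text-resizing heuristic: flag elements whose height did not
# increase when the text-size setting was raised. Data structures are assumptions.
def flag_text_resizing_issues(heights_default, heights_large, min_growth=1.05):
    """Return ids of UI elements whose text did not grow at the larger setting."""
    flags = []
    for element_id, default_h in heights_default.items():
        large_h = heights_large.get(element_id)
        if large_h is None:
            continue  # element missing at the larger setting; handled separately
        if large_h < default_h * min_growth:
            flags.append(element_id)
    return flags

# Example: the "title" label grew, the "caption" label did not.
issues = flag_text_resizing_issues(
    {"title": 20.0, "caption": 14.0},
    {"title": 28.0, "caption": 14.0},
)
print(issues)  # ['caption']
```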


Assessing the Effectiveness and Security Implications of AI Code Generators

Publication | Publication

In this project, we explored Large Language Model (LLM)-based generative AI tools such as OpenAI Codex, CodeBERT, and ChatGPT, which are increasingly popular among developers for coding assistance, alongside static analysis tools such as Flawfinder, Visual Code Grepper, Clang-Tidy, CyBERT, and Snyk. The generative tools produce and complete code from user input and instructions, but the quality and security of that code can vary significantly depending on the complexity of the task, the clarity of the instructions, and the model’s familiarity with the target programming language. Our study presents a thorough analysis and comparison of these models’ code generation and completion capabilities, focusing on the security implications of the code they produce.
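
As a small illustration of the static-analysis side of this comparison, the sketch below writes a generated C snippet to a temporary file and runs Flawfinder over it. This is a minimal harness, assuming the flawfinder CLI is installed and on PATH; it is not the study's evaluation pipeline.

```python
# Minimal sketch: screen an AI-generated C snippet with Flawfinder.
import subprocess
import tempfile
from pathlib import Path

GENERATED_C_SNIPPET = r"""
#include <string.h>
void copy_input(char *src) {
    char buf[16];
    strcpy(buf, src);  /* classic overflow pattern an analyzer should flag */
}
"""

def scan_with_flawfinder(code: str) -> str:
    """Run Flawfinder over a generated C snippet and return its report."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "generated.c"
        path.write_text(code)
        result = subprocess.run(
            ["flawfinder", str(path)],
            capture_output=True, text=True, check=False,
        )
        return result.stdout

if __name__ == "__main__":
    print(scan_with_flawfinder(GENERATED_C_SNIPPET))
```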


Fairness in Pre-trained Language Models

Publication | Publication

This research project examines the fairness and efficiency of pretrained language models (PTMs) such as ELECTRA and BERT in industrial applications such as HR screening and targeted advertising. ELECTRA excels at token prediction with lower computational demand, making it well suited for broad use, and we probe it for potential biases using datasets of racially and gender-associated names. In contrast, BERT’s language representations are scrutinized for bias through its tokenization and masking processes, by measuring the cosine similarity between masked tokens and the replacements the model predicts for them.

The study aims to understand and mitigate inherent biases in these models, thereby enhancing fairness in AI-driven applications and promoting equity in machine learning across various industries.
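
The sketch below illustrates the masking-and-similarity idea with Hugging Face Transformers and bert-base-uncased: a name is masked in a template sentence, BERT's top replacements are collected, and the cosine similarity between the name's embedding and each replacement's embedding is reported. The template and name pair are placeholders for illustration, not the study's dataset.

```python
# Illustrative masked-token similarity probe with bert-base-uncased.
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
mlm.eval(); encoder.eval()

def embed(word: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of a single word (subword pieces averaged)."""
    ids = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**ids).last_hidden_state
    return out[0, 1:-1].mean(dim=0)  # drop [CLS]/[SEP]

def replacement_similarity(template: str, name: str, top_k: int = 5):
    """Mask the name slot, take BERT's top replacements, and score cosine
    similarity between the name and each predicted replacement."""
    text = template.format(name=tokenizer.mask_token)
    ids = tokenizer(text, return_tensors="pt")
    mask_pos = (ids.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**ids).logits
    top = torch.topk(logits[0, mask_pos], top_k).indices
    name_vec = embed(name)
    results = []
    for tok in top:
        word = tokenizer.decode([int(tok)]).strip()
        sim = torch.cosine_similarity(name_vec, embed(word), dim=0).item()
        results.append((word, round(sim, 3)))
    return results

# Compare how similar BERT's fill-ins are across an illustrative name pair.
for name in ["Emily", "Lakisha"]:
    print(name, replacement_similarity("{name} is applying for the job.", name))
```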


Talks & Lectures

Publications

  1. Taeb, Maryam, et al. “AXNav: Replaying Accessibility Tests from Natural Language.” CHI Conference on Human Factors in Computing Systems (CHI 2024), May 11-16, 2024.
  2. Taeb, Maryam, Chi, Hongmei, Bernadin, Shonda. “Assessing the Effectiveness and Security Implications of AI Code Generators.” The Colloquium for Information Systems Security Education (CISSE 2023), Nov 1-3, 2023.
  3. Taeb, Maryam, Chi, Hongmei, Bernadin, Shonda. “Targeted Data Extraction and Deepfake Detection with Blockchain Technology.” International Conference on Universal Village (IEEE UV 2022), Oct 22-25, 2022.
  4. Taeb, Maryam, Torres, Yonathan, Chi, Hongmei, Bernadin, Shonda. “Investigating Gender and Racial Bias in ELECTRA.” International Conference on Computational Science & Computational Intelligence (CSCI’22), Dec 14-16, 2022.
  5. Elliston, J., Chi, H., Bernadin, S., & Taeb, M. “Integrating Blockchain Technology into Cybersecurity Education.” Future Technologies Conference (FTC) 2022, Nov 2-3, 2022.
  6. Taeb, M., & Bernadin, S. “Broadening Participation in URE Using PS-MMM-based Mentoring for URM Engineering Students.” The Chronicle of Mentoring & Coaching Conference 2022, Oct 23-27, 2022.
  7. Taeb, Maryam, Chi, Hongmei, Bernadin, Shonda. “Digital Evidence Acquisition and Deepfake Detection with Decentralized Applications.” Practice and Experience in Advanced Research Computing (PEARC), July 10-14, 2022.
  8. Taeb, Maryam, & Chi, Hongmei. “Comparison of Deepfake Detection Techniques through Deep Learning.” Journal of Cybersecurity and Privacy, 2(1), 89-106, 2022.
  9. Taeb, Maryam, Chi, Hongmei, Yan, Jie. “Applying Machine Learning to Analyze Anti-vaccination on Tweets.” IEEE International Conference on Big Data (BDA COVID-2021), Dec 15-18, 2021.
  10. Taeb, Maryam, and Chi, Hongmei. “A Personalized Learning Framework for Software Vulnerability Detection and Education.” 2021 International Workshop on Cyber Security (CSW), Aug 13-15, 2021.
  11. Taeb, Maryam, Chi, Hongmei, Jones, Edward L., et al. “Inherent Discriminability of BERT towards Racial Minority Associated Data.” The 21st International Conference on Computational Science and Applications (ICCSA 2021), Sept 13-16, 2021.
  12. Ali, A., Adjei, K., Fatimah, S., Ezendu, K., Taeb, M., Chi, H., King, C., Diaby, V. “Using Twitter to Examine Public Perceptions about COVID-19 in the United States: A Sentiment Analysis.” ISPOR, May 17-20.