Ph.D., Electrical & Computer Engineering | FAMU-FSU College of Engineering, Tallahassee, FL (August 2024) |
M.S., Computer Information Sciences | Florida Agricultural and Mechanical University, Tallahassee, FL (July 2021) |
B.S., Computer Science | University of Central Florida, Orlando, FL (December 2019) |
Assistant Professor @ University of West Florida, Department of Cybersecurity and Information Technology (August 2024 - Present)
Research Assistant @ SPADAL Lab, FAMU-FSU College of Engineering (August 2021 - July 2024)
AI/ML Researcher Intern @ Apple (May 2023 - September 2023)
Adjunct Faculty @ School of Architecture & Engineering Technology, FAMU (August 2023 - December 2023)
This project addresses the growing threat of synthetic voice spoofing in social applications by proposing a hybrid deep learning model that combines a MobileNet CNN with a Temporal Convolutional Network (TCN). The model extracts spatial and temporal features from mel-spectrograms to detect fake audio with high accuracy. Tested on the Fake-or-Real (FoR) dataset, it achieved up to 99.89% training precision and 99% average precision, making it well suited to real-time use in resource-limited settings.
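A minimal sketch of the hybrid design, assuming PyTorch and torchvision; the layer sizes and the 128-mel, 128-frame input are illustrative assumptions, not the published configuration:

```python
# Hybrid MobileNet + TCN detector sketch (illustrative, not the paper's exact model).
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class MobileNetTCN(nn.Module):
    def __init__(self, num_classes=2, tcn_channels=64):
        super().__init__()
        # MobileNetV2 backbone extracts spatial features from the mel-spectrogram,
        # which is tiled to 3 channels to match the expected image input.
        self.backbone = mobilenet_v2(weights=None).features  # -> (B, 1280, h, t)
        # Dilated 1-D convolutions act as a simple TCN over the time axis.
        self.tcn = nn.Sequential(
            nn.Conv1d(1280, tcn_channels, kernel_size=3, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(tcn_channels, tcn_channels, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
        )
        self.head = nn.Linear(tcn_channels, num_classes)

    def forward(self, mel):                  # mel: (B, 1, n_mels, time)
        x = mel.repeat(1, 3, 1, 1)           # tile to 3 channels
        x = self.backbone(x)                 # (B, 1280, h, t)
        x = x.mean(dim=2)                    # pool frequency axis -> (B, 1280, t)
        x = self.tcn(x)                      # temporal features   -> (B, C, t)
        x = x.mean(dim=2)                    # pool time axis      -> (B, C)
        return self.head(x)                  # real-vs-fake logits

logits = MobileNetTCN()(torch.randn(4, 1, 128, 128))
```

The lightweight MobileNet backbone is what makes this kind of design plausible for real-time inference on constrained hardware.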
Publication | Open-Source Framework
ARGObot is an AI-powered academic advising system designed to support student success by offering personalized guidance based on university policies. Addressing challenges like limited advisor availability, it uses a Large Language Model (LLM) with a multi-agent architecture, incorporating Retrieval-Augmented Generation (RAG) from verified sources, email integration for human-in-the-loop escalation, and web search to expand its knowledge. Originally built on Gemini 1 Pro, the system was upgraded to GPT-4 with text-embedding-ada-002, significantly improving performance. This project compares both versions and shows how the enhanced design better addresses student queries with accurate, context-aware responses.
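A minimal sketch of the RAG step, assuming the openai>=1.0 Python client; the policy snippets, prompt wording, and `answer` helper are hypothetical placeholders, not ARGObot's actual agents:

```python
# Simplified retrieve-then-generate loop (illustrative, not ARGObot's pipeline).
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

policies = [
    "Students must maintain a 2.0 GPA to remain in good standing.",
    "Late course drops require approval from the academic dean.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(policies)

def answer(question, k=1):
    q = embed([question])[0]
    # Cosine similarity ranks policy snippets; the top-k become grounded context.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(policies[i] for i in sims.argsort()[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Advise using only this policy text:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What GPA do I need to stay in good standing?"))
```

Grounding the model in retrieved policy text, rather than its parametric memory, is what keeps responses consistent with verified university sources.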
This study explores how commercial Large Vision-Language Models (LVLMs) such as LLaVA and multimodal GPT-4 can be misused to bypass CAPTCHA and reCAPTCHA systems. Findings show that standard, unmodified models can interpret visual challenges and respond accurately without any adversarial techniques, raising concerns about their potential use in bot-driven fraud and unauthorized account access.
Publication | News Coverage | Open-Source Framework
The Blockchain-Based Decentralized Federated Learning Framework for Digital Forensic Applications & Deepfake Detection (BFDD) is an innovative solution designed to support law enforcement agencies in authenticating digital evidence. The framework enhances targeted data extraction and improves deepfake detection capabilities using advanced machine learning techniques. By strengthening evidence authentication, BFDD aims to reinforce the reliability of digital media in judicial settings, addressing critical issues in digital forensics and the integrity of evidence.
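A minimal sketch of the underlying idea, assuming plain Python and NumPy: federated averaging of locally trained detector weights, with each round committed to a hash chain so tampering with past updates is detectable. The `commit` and `fed_avg` helpers are hypothetical simplifications of BFDD's actual consensus and aggregation:

```python
# Federated averaging with a tamper-evident hash chain (illustrative only).
import hashlib, json
import numpy as np

chain = [{"index": 0, "prev": "0" * 64, "payload": "genesis"}]

def commit(payload):
    # Each block's hash covers the previous block, so altering history breaks the chain.
    prev = hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
    chain.append({"index": len(chain), "prev": prev, "payload": payload})

def fed_avg(client_weights):
    # Average model updates from participating forensic labs.
    return np.mean(client_weights, axis=0)

# Two simulated clients submit local deepfake-detector weight vectors.
updates = [np.array([0.2, 0.8]), np.array([0.4, 0.6])]
global_w = fed_avg(updates)
commit({"round": 1, "model_hash": hashlib.sha256(global_w.tobytes()).hexdigest()})
print(chain[-1])
```

Recording only the hash of each aggregated model on-chain keeps the ledger small while still letting a court verify that the detector used on evidence matches an auditable training history.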
In this project, we addressed the challenges of manual accessibility testing by introducing a novel system that uses Large Language Models (LLMs) and pixel-based UI understanding models to automate accessibility testing. The system takes manual test commands, such as “Search for a show in VoiceOver,” executes them, and generates chaptered, navigable videos. These videos include heuristic-based flags for potential accessibility issues, such as improper text resizing or navigation loops in VoiceOver.
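One such heuristic can be illustrated with a short sketch; the trace format and the `find_navigation_loop` helper are hypothetical simplifications, not the system's internal representation:

```python
# Detect a VoiceOver navigation loop from a trace of focused elements (illustrative).
def find_navigation_loop(focus_trace, min_repeats=2):
    """Flag if swiping forward keeps revisiting the same cycle of elements."""
    n = len(focus_trace)
    for size in range(1, n // (min_repeats + 1) + 1):
        cycle = focus_trace[:size]
        # A loop exists if the trace is the same short cycle repeated many times.
        if cycle * (n // size) == focus_trace[: size * (n // size)] and n // size > min_repeats:
            return cycle
    return None

trace = ["Search field", "Show tile", "Play button"] * 4  # focus never escapes
print(find_navigation_loop(trace))  # -> ['Search field', 'Show tile', 'Play button']
```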
In this project, we explored the use of Large Language Model (LLM)-based generative AI tools, such as OpenAI Codex, CodeBERT, and ChatGPT, which are increasingly popular among developers for coding assistance, alongside static analysis tools such as FlawFinder, Visual Code Grepper, Clang-Tidy, CyBERT, and Snyk. The generative tools produce and complete code from user input and instructions, but the quality and security of the generated code can vary significantly with the complexity of the task, the clarity of the instructions, and the AI's familiarity with the target programming language. Our study conducts a thorough analysis and comparison of the code generation and completion capabilities of these models, focusing on the security implications of the code they produce.
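A minimal sketch of one evaluation step, assuming the flawfinder CLI is installed; the C snippet below stands in for model-generated code and is deliberately unsafe:

```python
# Run FlawFinder over a generated snippet and pull its hit count (illustrative).
import pathlib
import subprocess
import tempfile

generated_c = """
#include <string.h>
void copy(char *dst, const char *src) { strcpy(dst, src); }  /* classic CWE-120 */
"""

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "gen.c"
    path.write_text(generated_c)
    # FlawFinder prints a "Hits = N" summary line at the end of its report.
    report = subprocess.run(
        ["flawfinder", str(path)], capture_output=True, text=True
    ).stdout
    hits = [line for line in report.splitlines() if line.startswith("Hits =")]
    print(hits[0] if hits else "no summary found")
```

Repeating this loop over many prompts and models yields comparable per-tool vulnerability counts for the same generation task.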
This research project examines the fairness and efficiency of pretrained language models (PTMs) such as ELECTRA and BERT in industrial applications like HR and targeted advertising. ELECTRA excels at token prediction with lower computational demand, making it apt for broad deployment, and we probe it for potential biases using datasets of racially and gender-associated names. In contrast, BERT's language representations are scrutinized for bias through its tokenization and masking processes, by measuring the cosine similarity between tokens and their replacements.
The study aims to understand and mitigate inherent biases in these models, thereby enhancing fairness in AI-driven applications and promoting equity in machine learning across various industries.
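A minimal sketch of the name-substitution probe, assuming Hugging Face transformers with bert-base-uncased; the sentence pair and names are illustrative:

```python
# Compare contextual embeddings of the same word under different names (illustrative).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence, word):
    # Contextual embedding of `word` inside `sentence`.
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

a = embed_word("emily is a qualified engineer.", "engineer")
b = embed_word("lakisha is a qualified engineer.", "engineer")
# Lower similarity means the name alone shifted the occupation's representation.
print(torch.cosine_similarity(a, b, dim=0).item())
```

If the model were name-neutral, the cosine similarity would stay near 1.0; systematic drops across name pairs are the bias signal the study measures.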