Computer Science Innovation Research Lab

Research Papers Summary

ULTRAVIOLET

A Unified Framework for Invisible Watermark Detection and Benchmarking
Catch-22
AUTHORS
Brylle Ace Nuñez
Justin Clark Ong
Maryrose Pergis
Kyle Santos
CO-AUTHORS
Roselle Gardon
Lorena Rabago
The sudden rise of generative AI has raised concerns about the effects of AI deepfakes and the possibility of model collapse from training on AI-generated data. Major generative AI vendors Google, Meta, and Stability AI encode invisible watermarks in their output to combat this, but fragmented algorithms and a lack of literature quantifying their limitations make them impractical for real-world use. Ultraviolet addresses this by supporting the visual watermarking algorithms of all three vendors in a modular framework and by benchmarking their robustness against cropping, filtering, scaling, and compression attacks. The study finds that Google’s SynthID is the most robust across all distortions except cropping, achieving 84% recall at the cost of a small number of false positives.
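
As a rough illustration of the benchmarking idea described above, the sketch below applies the four distortion families (cropping, Gaussian-blur filtering, downscaling, and JPEG compression) to a folder of watermarked images and reports detection recall per distortion. The `detect_watermark` callable is a hypothetical stand-in for a vendor decoder such as SynthID's, not Ultraviolet's actual API, and the distortion parameters are illustrative assumptions.

```python
# Illustrative benchmark loop: apply crop, blur, downscale, and JPEG-compression
# attacks to watermarked images and measure how often the watermark is still
# detected. `detect_watermark` is a hypothetical decoder callback, not a real API.
import io
from pathlib import Path
from PIL import Image, ImageFilter

def distortions(img):
    """Yield (name, attacked_image) pairs for the four attack families."""
    w, h = img.size
    yield "crop", img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
    yield "blur", img.filter(ImageFilter.GaussianBlur(radius=2))
    yield "scale", img.resize((w // 2, h // 2))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=50)
    yield "jpeg_q50", Image.open(buf)

def benchmark(watermarked_dir, detect_watermark):
    """Recall per attack = detected watermarks / total watermarked images."""
    hits, totals = {}, {}
    for path in Path(watermarked_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        for name, attacked in distortions(img):
            totals[name] = totals.get(name, 0) + 1
            if detect_watermark(attacked):  # True if the watermark survives the attack
                hits[name] = hits.get(name, 0) + 1
    return {name: hits.get(name, 0) / totals[name] for name in totals}
```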

Multimodal Sentiment

Analysis of Taglish Short-Form Reviews on Computer Peripherals
AUTHORS
Kai Butalid
Mon David Olarte
Liam Miguel Supremo
Jan Michael Villeza
CO-AUTHORS
Jojo Castillo
Lorena Rabago
The rise of short-form video platforms such as TikTok, YouTube Shorts, and Instagram Reels has reshaped consumer product reviews into multimodal content that integrates text, audio, and visual elements. In the Philippine context, these reviews are frequently delivered in Taglish—a hybrid of Tagalog and English—posing challenges for traditional sentiment analysis models designed for monolingual and single-modality data. This study introduces a specialized multimodal sentiment analysis framework tailored to Taglish video reviews of computer peripherals. It processes transcribed text using a fine-tuned Multilingual BERT (mBERT), audio features via a Bidirectional Gated Recurrent Unit (Bi-GRU), and visual cues from facial expressions extracted through OpenFace 2.0, also modeled with Bi-GRU.
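
A minimal PyTorch sketch of the three-branch, late-fusion design described above: mBERT over the transcript, one Bi-GRU over frame-level audio features, and another Bi-GRU over per-frame OpenFace 2.0 facial features, fused by concatenation. The feature dimensions, the fusion-by-concatenation choice, and the three-class output are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class TaglishMultimodalSentiment(nn.Module):
    def __init__(self, audio_dim=40, face_dim=35, hidden=128, num_classes=3):
        super().__init__()
        # Text branch: multilingual BERT over the Taglish transcript
        self.mbert = AutoModel.from_pretrained("bert-base-multilingual-cased")
        # Audio branch: Bi-GRU over frame-level acoustic features (e.g. MFCCs)
        self.audio_gru = nn.GRU(audio_dim, hidden, batch_first=True, bidirectional=True)
        # Visual branch: Bi-GRU over per-frame OpenFace 2.0 facial features
        self.face_gru = nn.GRU(face_dim, hidden, batch_first=True, bidirectional=True)
        fused = self.mbert.config.hidden_size + 2 * hidden + 2 * hidden
        self.classifier = nn.Sequential(
            nn.Linear(fused, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, input_ids, attention_mask, audio_seq, face_seq):
        # [CLS] embedding summarises the transcript
        text = self.mbert(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state[:, 0]
        _, a_h = self.audio_gru(audio_seq)   # final hidden states, shape (2, batch, hidden)
        _, f_h = self.face_gru(face_seq)
        audio = torch.cat([a_h[0], a_h[1]], dim=-1)  # concat forward + backward directions
        face = torch.cat([f_h[0], f_h[1]], dim=-1)
        # Late fusion by concatenation, then a small classification head
        return self.classifier(torch.cat([text, audio, face], dim=-1))
```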

SalinTala

A Fine-Tuned mBART-50 Model for Translating Taglish Texts into English
AUTHORS
Justine Denise Hernandez
Jorenzo Martin Reyes
Reejay Salinas
Althea Noelle Sarmiento
CO-AUTHORS
Eugenio Quesada
Lorena Rabago
In the linguistically diverse Philippines, Filipino-English code-switching—commonly known as Taglish—has become a dominant mode of communication in digital and social contexts. However, existing multilingual machine translation (MT) systems struggle with mixed-language inputs, often resulting in unnatural phrasing and semantic inaccuracies. This study introduces SalinTala, a fine-tuned mBART-50 model designed to translate Taglish texts into grammatically correct English. Leveraging the Tweet Taglish dataset of authentic social media posts, the study applies a rigorous preprocessing pipeline to normalize slang, spelling variants, and noise while preserving code-switching structures.
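
A brief usage sketch of how a fine-tuned mBART-50 translator like SalinTala could be invoked with Hugging Face Transformers. The public many-to-many mBART-50 checkpoint is shown only as a placeholder (the actual SalinTala weights are not assumed to be published), and the Tagalog source-language code is an approximation for Taglish input.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Placeholder: the public base checkpoint; swap in the fine-tuned SalinTala weights.
model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Taglish input is closest to Tagalog among mBART-50's supported language codes.
tokenizer.src_lang = "tl_XX"
text = "Sobrang ganda ng keyboard na 'to, super worth it for the price."
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # decode into English
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```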