Computer Science Innovation Research Lab
Research Papers Summary
Catch-22

| AUTHORS | CO-AUTHORS |
|---|---|
| Brylle Ace Nuñez, Justin Clark Ong, Maryrose Pergis, Kyle Santos | Roselle Gardon, Lorena Rabago |
The sudden rise of generative AI has raised concerns about AI deepfakes and the possibility of model collapse from training on AI-generated data. Major generative AI vendors Google, Meta, and Stability AI encode invisible watermarks in their output to combat this, but fragmentation across algorithms and the lack of literature quantifying their limitations make them impractical for real-world use. Ultraviolet addresses this by supporting the visual watermarking algorithms of the three vendors in a modular framework and by benchmarking their robustness against cropping, filtering, scaling, and compression attacks. The study finds that Google's SynthID is the most robust across all distortions except cropping, with an 84% recall while incurring only minimal false positives.
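The robustness benchmark described above can be pictured with a short sketch: apply the four attack classes to watermarked images and measure how often a detector still fires, alongside the false-positive rate on clean images. This is not Ultraviolet's code; `detect_watermark` is a hypothetical stand-in for a vendor decoder, and the distortion parameters are illustrative only.

```python
# Sketch: measure watermark-detection recall under four attack classes,
# plus false positives on clean images. `detect_watermark` is hypothetical.
import io
from PIL import Image, ImageFilter

def distortions(img: Image.Image):
    """Yield (attack name, distorted image) pairs."""
    w, h = img.size
    yield "crop", img.crop((w // 10, h // 10, w - w // 10, h - h // 10))  # trim 10% borders
    yield "filter", img.filter(ImageFilter.GaussianBlur(radius=2))
    yield "scale", img.resize((w // 2, h // 2)).resize((w, h))            # downscale, restore
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=50)                              # lossy re-compression
    yield "compress", Image.open(buf)

def benchmark(watermarked, clean, detect_watermark):
    """Return per-attack recall and the false-positive rate on clean images."""
    hits = {"crop": 0, "filter": 0, "scale": 0, "compress": 0}
    for img in watermarked:
        for name, attacked in distortions(img):
            hits[name] += int(detect_watermark(attacked))
    recall = {name: count / len(watermarked) for name, count in hits.items()}
    fpr = sum(int(detect_watermark(img)) for img in clean) / len(clean)
    return recall, fpr
```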
| AUTHORS | CO-AUTHORS |
|---|---|
| Kai Butalid, Mon David Olarte, Liam Miguel Supremo, Jan Michael Villeza | Jojo Castillo, Lorena Rabago |
The rise of short-form video platforms such as TikTok, YouTube Shorts, and Instagram Reels has reshaped consumer product reviews into multimodal content that integrates text, audio, and visual elements. In the Philippine context, these reviews are frequently delivered in Taglish—a hybrid of Tagalog and English—posing challenges for traditional sentiment analysis models designed for monolingual and single-modality data. This study introduces a specialized multimodal sentiment analysis framework tailored to Taglish video reviews of computer peripherals. It processes transcribed text using a fine-tuned Multilingual BERT (mBERT), audio features via a Bidirectional Gated Recurrent Unit (Bi-GRU), and visual cues from facial expressions extracted through OpenFace 2.0, also modeled with Bi-GRU.
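As a rough illustration of the architecture described above, a late-fusion model along those lines might look like the sketch below. This is not the authors' implementation: the feature dimensions, fusion by concatenation, and the `bert-base-multilingual-cased` checkpoint are assumptions made for the example.

```python
# Sketch: late-fusion multimodal sentiment classifier (text + audio + face).
# Dimensions are placeholders, e.g. 40 MFCC-style audio features per frame
# and an assumed width for OpenFace 2.0 per-frame feature vectors.
import torch
import torch.nn as nn
from transformers import AutoModel

class TaglishMultimodalSentiment(nn.Module):
    def __init__(self, audio_dim=40, face_dim=709, hidden=128, num_classes=3):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        self.audio_gru = nn.GRU(audio_dim, hidden, batch_first=True, bidirectional=True)
        self.face_gru = nn.GRU(face_dim, hidden, batch_first=True, bidirectional=True)
        fused = self.text_encoder.config.hidden_size + 2 * hidden + 2 * hidden
        self.classifier = nn.Linear(fused, num_classes)

    def forward(self, input_ids, attention_mask, audio_seq, face_seq):
        # Text branch: [CLS] embedding from mBERT over the transcribed review.
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state[:, 0]
        # Audio and visual branches: final hidden states of the Bi-GRUs.
        _, audio_h = self.audio_gru(audio_seq)   # audio_h: (2, batch, hidden)
        _, face_h = self.face_gru(face_seq)
        audio = torch.cat([audio_h[0], audio_h[1]], dim=-1)
        face = torch.cat([face_h[0], face_h[1]], dim=-1)
        # Fuse the three modality embeddings and classify sentiment.
        return self.classifier(torch.cat([text, audio, face], dim=-1))
```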
| AUTHORS | CO-AUTHORS |
|---|---|
| Justine Denise Hernandez, Jorenzo Martin Reyes, Reejay Salinas, Althea Noelle Sarmiento | Eugenio Quesada, Lorena Rabago |