Culture

We strive to build an interdisciplinary team focused on understanding humans and improving overall human well-being. I call my area of research Human-AI Interaction: an interdisciplinary field at the intersection of technology and people. Our core values are:

  • Experimental: We value scientific rigor, grounding our research on strong scientific foundations and conducting sound experiments that yield reliable, repeatable findings.
  • Computational: Our approach relies on algorithms, mathematical models, a strong theoretical background, and strong coding skills.

Lab Entry

We welcome students from all disciplines, but only those with a strong passion for research.

Problems we are interested in

We are mostly interested in problems related to modeling human behaviors and emotions with computational models and algorithms, and in improving human well-being and health. Given our technical inclinations, we are also interested in computer vision and voice problems.

Modeling humans

  1. Modeling humans through EEG: What does the EEG profile look like when humans are stressed, engaged, or mindful? Can we then create an intervention system to improve these states?
  2. Modeling humans through language: How can we model beliefs, emotions, cognition, behaviors, culture, biases, and so on, in language?
  3. Understanding humans through voice and face: Can we detect depression, confidence, humor, or risk profiles from a person's voice? What should we ask them to say in order to detect that? How long does the voice sample need to be? Does it work only for certain languages? Do results improve when voice is combined with facial features?
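
For item 1 above, a common first feature in EEG studies of stress and relaxation is spectral band power (for example, the alpha/beta ratio). Below is a minimal sketch on a synthetic signal, using a naive DFT rather than any particular EEG toolkit; the signal, sampling rate, and band edges are illustrative assumptions, not real data:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band via a naive DFT (O(N^2), fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

# Synthetic 1-second "relaxed" EEG window: strong 10 Hz alpha, weak 20 Hz beta.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.2 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs)]

alpha = band_power(sig, fs, 8, 12)   # alpha band power
beta = band_power(sig, fs, 13, 30)   # beta band power
ratio = alpha / beta                  # a crude relaxation indicator
```

A real pipeline would add windowing, artifact rejection, and per-subject baselines, but the band-power feature itself is this simple.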

Well-being and health

  1. Brain and Medical Imaging: What are some imaging tasks that doctors cannot do very well but AI can? How to incorporate AI into the current workflow? How to build explainability and trust? How can reasoning be done? Can we incorporate VQA? How do traditional computer vision techniques compare with modern techniques like semi-supervised or self-supervised learning? How to deal with millions of unlabeled images? Can we use diffusion or generative AI to create unlimited medical images?
  2. Raman Spectroscopy: How to measure blood glucose non-invasively using Raman spectroscopy? Can we put it into a portable, low-cost device and deploy it in households?
  3. EEG BCI Speller: How to develop an effective and user-friendly BCI speller for locked-in patients using EEG?
  4. Chatbot for emotional support: Can we incorporate CBT/DBT/ACT into a chatbot to provide emotional support to people suffering from stress or depression?
  5. Medical literature summarization: How to help doctors summarize thousands of documents? How to deal with long context windows? How to summarize for different domains that have different requirements? How to integrate clinical trials, evidence, or some other knowledge base? How to handle ambiguity and conflicting information? How to enable interactive summarization? How to validate these summaries? How to integrate with clinical workflows?
  6. Medical Visual Question Answering (VQA): How to answer complex, image-based questions for medical diagnosis and treatment? How to ensure accuracy and clinical relevance in VQA responses? How to handle variability in image quality and modality across different medical fields? How to interpret multi-modal data (e.g., combining text and images) effectively for comprehensive responses? How to address ambiguity or uncertainty in medical images and communicate this effectively to users?
  7. GIS applications for water, air, and agriculture data management: Thailand suffers frequent floods, and the wider region struggles with water sanitation and air pollution. Can we create a comprehensive GIS application for monitoring and analytics?

Computer Vision

  1. Car damage prediction system: How to design and develop a system that accurately and reliably predicts car damage?
  2. House damage prediction system: How to design and develop a system that accurately and reliably predicts house damage after floods, which are very common in Thailand?
  3. VQA: Can we create a VQA system that understands images along various dimensions: semantic, spatial (where is what), numerical (how many), role (who are they), and emotional (what is happening)? But perhaps the first question is how to streamline the generation of a high-quality dataset.
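
For items 1 and 2, before training a learned detector it helps to have a naive baseline for sanity-checking data: flag image patches whose brightness statistics deviate strongly from the rest of the image. A toy sketch on a plain list-of-lists "image" (no imaging library assumed; real damage prediction would of course use learned features, this only illustrates the patch-scoring idea):

```python
def flag_anomalous_patches(image, patch=4, z_thresh=2.0):
    """Return (row, col) corners of patches whose mean brightness is an outlier."""
    h, w = len(image), len(image[0])
    means, coords = [], []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            vals = [image[y + dy][x + dx] for dy in range(patch) for dx in range(patch)]
            means.append(sum(vals) / len(vals))
            coords.append((y, x))
    mu = sum(means) / len(means)
    var = sum((m - mu) ** 2 for m in means) / len(means)
    std = var ** 0.5 or 1.0  # guard against a perfectly uniform image
    return [c for c, m in zip(coords, means) if abs(m - mu) / std > z_thresh]

# Toy 16x16 grayscale image: uniform gray with one dark "damaged" patch.
img = [[128] * 16 for _ in range(16)]
for y in range(8, 12):
    for x in range(4, 8):
        img[y][x] = 10

flagged = flag_anomalous_patches(img)
```

Such a baseline gives a cheap lower bound to beat and a quick way to spot mislabeled images in a new dataset.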

Voice

  1. Speech2Text / Text2Speech: Most Thai Speech2Text / Text2Speech systems still do not work well. We need better datasets, engineering, and training approaches.
  2. Voice cloning: Thailand does not yet have a voice-cloning model. Voice cloning would be useful for education and tourism.
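
Related to both items, a classic building block in speech processing (for TTS prosody and voice cloning alike) is pitch (F0) estimation, for example via autocorrelation. A minimal sketch on a synthetic tone; the sampling rate and search range are illustrative, and production systems use far more robust estimators:

```python
import math

def estimate_f0(samples, fs, f_min=80, f_max=400):
    """Estimate fundamental frequency as fs / (lag with highest autocorrelation)."""
    best_lag, best_corr = 0, 0.0
    for lag in range(int(fs / f_max), int(fs / f_min) + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag if best_lag else 0.0

fs = 8000
tone = [math.sin(2 * math.pi * 220 * t / fs) for t in range(2048)]  # 220 Hz test tone
f0 = estimate_f0(tone, fs)
```

The estimate is quantized to integer lags (here about 222 Hz for a 220 Hz tone); real pitch trackers interpolate between lags and handle noise and octave errors.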