SIBGRAPI 2025
September 30 to October 3, 2025
Salvador, Bahia, SENAI/CIMATEC
Welcome to
SIBGRAPI 2025
The Conference on Graphics, Patterns and Images (SIBGRAPI) is an international conference promoted annually by the Special Interest Group on Computer Graphics and Image Processing (CEGRAPI) of the Brazilian Computing Society (SBC).
SIBGRAPI began in 1988 and combines contributions from computer graphics and vision, pattern recognition, and image processing. It comprises the main conference and several co-located workshops and short courses. Organized by an international committee, it offers high-quality content at low cost to students, academics, and industry researchers.
The event will take place from September 30th to October 3rd in Salvador, BA, Brazil. It is organized by SENAI CIMATEC University.
The conference is held in conjunction with two other major events: SBGames 2025 – Symposium on Computer Games and Digital Entertainment, and SVR 2025 – Symposium on Virtual and Augmented Reality.
SIBGRAPI 2025
Important Dates
Main Track
- Paper Submission Deadline: June 30, 2025 (extended from June 23)
- Reviews Available to Authors: August 1, 2025
- Rebuttal Deadline: August 8, 2025
- Notification of Conditional Acceptance or Rejection: August 18, 2025
- Revised Paper Submission Deadline: August 31, 2025
- Final Decision Notification: September 7, 2025
- Camera-Ready Paper Due: September 14, 2025
Other Tracks (submission deadlines)
- C&G (Computers & Graphics): April 18, 2025 (extended from April 11)
- GRSL (Geoscience and Remote Sensing Letters): April 18, 2025 (extended from April 11)
- Tutorials: July 22, 2025 (extended from July 15)
- WTD (Workshop of Theses and Dissertations): July 21, 2025 (extended from July 7)
- WUW (Workshop on Undergraduate Work): August 8, 2025 (extended from July 5 and July 31)
- WIP (Works in Progress): August 11, 2025
- WIA (Workshop of Industry Applications): July 7, 2025
- Thematic Workshop: July 1, 2025
Keynote Speakers

Andrew Glassner
Wētā FX
Short Bio
Andrew Glassner is a Senior Research Engineer at Wētā FX, where he develops tools to help artists produce amazing visual effects for movies and television.
Glassner has served as Papers Chair for SIGGRAPH ’94, Founding Editor of the Journal of Computer Graphics Tools, and Editor-in-Chief of ACM Transactions on Graphics. Andrew is a well-known writer of numerous technical papers and books. Some of his books include the “Graphics Gems” series, the “Andrew Glassner’s Notebook” series, “Principles of Digital Image Synthesis,” and “Deep Learning: A Visual Approach.” His most recent book is “Quantum Computing: From Concepts to Code.” He has carried out research at the NYIT Computer Graphics Lab, Xerox PARC, Microsoft Research, the Imaginary Institute, Unity, and Wētā FX.
Glassner has written and directed live-action and animated films, written several novels and screenplays, and was writer-director of an online multiplayer murder-mystery game for The Microsoft Network.
In his spare time, Andrew paints, plays and writes music, and hikes.
Quantum Computing and Computer Graphics
Some technological revolutions change societies and the tools they depend on. Recently, electronics, computers, and cell phones have upended our cultures, and AI seems to be doing it again. Next on the horizon are quantum computers.
These devices – already built and working – offer us capabilities completely unlike those of classical computers. One of their key features is called quantum parallelism. This refers to the ability of a quantum computer to evaluate an arbitrary number of inputs (billions! trillions! any number you can dream of) simultaneously, in the time it takes to evaluate only one. Nothing is perfect, though: from these vast results, we can only extract one output at a time – and we usually cannot even choose which one we’ll get! Navigating this situation, and others like it, is leading us into a new art of programming based on new ideas.
When quantum computers become plentiful, cheap, and reliable (and they’re becoming more of all of these things every day), many of the algorithms we use every day in computer graphics will be radically changed. We’ll use quantum computing in tasks from modeling and rendering to simulation and interaction. In this talk I’ll discuss the key ideas underlying quantum computers, and speculate on their applications in computer graphics. Quantum computing will transform our field – this is the perfect time to prepare!
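The measurement trade-off described above can be caricatured classically. The sketch below is a toy illustration in plain Python (not a real quantum simulator, and not from the talk itself): conceptually, all 2^n input-output pairs become available "at once," but a single measurement returns only one pair, chosen at random rather than by us. The function name is hypothetical.

```python
import random

def parallel_eval_then_measure(f, n_qubits):
    """Toy caricature (hypothetical) of quantum parallelism:
    conceptually, f is applied to all 2**n basis states 'at once'."""
    table = {x: f(x) for x in range(2 ** n_qubits)}
    # ...but measurement collapses the superposition: we obtain exactly
    # one (input, output) pair, and we cannot choose which one.
    x = random.randrange(2 ** n_qubits)
    return x, table[x]

# One "run": evaluate f on all 8 inputs, then measure a single result.
x, fx = parallel_eval_then_measure(lambda v: (v * v) % 7, 3)
```

Quantum algorithms earn their advantage precisely by arranging interference so that the one result we do extract is likely to be a useful one.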

Ayush Bhargava
Meta Reality Labs
Short Bio
Dr. Ayush Bhargava is a User Experience Researcher at Meta Reality Labs. His work at Meta sits at the intersection of human perception, input, and interaction, regularly pushing the boundaries of spatial computing. He uses the lens of perception to understand human behavior in order to improve the overall user experience of immersive applications.
Dr. Bhargava earned his PhD in Computer Science from Clemson University, focusing on affordance perception and interaction in virtual reality. His past work in the field of VR has covered a wide variety of topics, including self-avatars, perceptual calibration, perception-action, educational simulations, 3D interaction, tangibles, and cybersickness.
Talk Title (TBD)
Coming Soon…

Diego Thomas
Kyushu University
Short Bio
Professor Diego Thomas completed his Master’s degree at ENSIMAG-INPG (Engineering School of Computer Science and Mathematics), Grenoble, France, in 2008. He received his Ph.D. from the National Institute of Informatics, Tokyo, Japan, in 2012, as a student of SOKENDAI. He has been an Associate Professor at Kyushu University, Fukuoka, Japan, since 2023. His research interests are 3D vision, motion synthesis, computer graphics, and digital humans. He is the author or co-author of 90 peer-reviewed journal and international conference papers, and a regular reviewer for international conferences and journals in computer vision. He has also served at several international conferences, including PSIVT’19 (area chair), MPR’19 (program chair), IPSJ’21 (session chair), and 3DV’20 (local chair). He received the MIRU Nagao Award in 2024.
Talk Title (TBD)
Coming Soon…

Dinesh Manocha
University of Maryland, College Park
Short Bio
Dinesh Manocha is the Paul Chrisman Iribe Professor of Computer Science and Electrical and Computer Engineering and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically based modeling, and robotics. His group has developed a number of software packages that are standard and licensed to 60+ commercial vendors. He has published more than 750 papers and supervised 50 PhD dissertations. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, the ACM, the Institute of Electrical and Electronics Engineers (IEEE), and the National Academy of Inventors. He is also a member of the ACM’s Special Interest Group on Computer Graphics and Interactive Techniques and the IEEE Visualization and Graphics Technical Community’s Virtual Reality Academy. Manocha is the recipient of a Pierre Bézier Award from the Solid Modeling Association, a Distinguished Alumni Award from the Indian Institute of Technology Delhi, and a Distinguished Career Award in Computer Science from the Washington Academy of Sciences. He was also a co-founder of Impulsonic, a developer of physics-based audio simulation technologies that was acquired by Valve Corporation in November of 2016.
Robot Navigation in Complex Indoor and Outdoor Environments
In the last few decades, most robotics success stories have been limited to structured or controlled environments. A major challenge is to develop robot systems that can operate in complex or unstructured environments such as homes, dense traffic, outdoor terrains, and public places. In this talk, we give an overview of our ongoing work on developing robust planning and navigation technologies that use recent advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. We present new methods that utilize multi-modal observations from an RGB camera, 3D LiDAR, and robot odometry for scene perception, along with deep reinforcement learning for reliable planning. The latter is also used to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles and on uneven terrains. We have integrated these methods with wheeled robots, home robots, and legged platforms, and highlight their performance in crowded indoor scenes, home environments, and dense outdoor terrains.

Gregory F. Welch
University of Central Florida
Short Bio
Gregory Welch is a Pegasus Professor and the AdventHealth Endowed Chair in Healthcare Simulation at the University of Central Florida (UCF), with appointments in the College of Nursing, the College of Engineering & Computer Science (Computer Science), and the Institute for Simulation & Training, and is a Co-Director of the Synthetic Reality Laboratory. He received a B.S. degree in Electrical Engineering Technology from Purdue University in 1986, with Highest Distinction, and a Ph.D. in Computer Science from the University of North Carolina at Chapel Hill in 1996. Prior to UCF, he was a Research Professor at UNC, worked on the Voyager Spacecraft Project at NASA’s Jet Propulsion Laboratory, and worked on airborne electronic countermeasures at Northrop Grumman’s Defense Systems Division. He conducts research in areas including virtual and augmented reality, human-computer interaction, human motion tracking, and human surrogates for training and practice, with a focus on applications such as healthcare and defense. He has co-chaired numerous international conferences, workshops, and seminars in these areas, co-authored over 150 associated publications, and is a co-inventor on multiple patents. His 1995 introductory article on the Kalman filter has been cited over 9,000 times. His awards include the 2018 Institute of Electrical and Electronics Engineers (IEEE) Virtual Reality Technical Achievement Award and the 2016 IEEE International Symposium on Mixed and Augmented Reality Long Lasting Impact Paper Award. He is presently serving on the World Economic Forum’s Global Future Council on Virtual Reality and Augmented Reality and the International Virtual Reality Healthcare Association’s Advisory Board, as an Associate Editor for the journals PRESENCE: Virtual and Augmented Reality and Frontiers in Virtual Reality, and as an expert witness on intellectual property matters.
He is a Fellow of the IEEE and of the National Academy of Inventors (NAI), and a Member of the UCF Chapter of the National Academy of Inventors, the Association for Computing Machinery (ACM), the European Association for Computer Graphics, and multiple healthcare-related societies. He is an ACM SIGGRAPH Pioneer and serves as an IEEE Technical Expert for Virtual, Augmented and Mixed Reality.
Beyond XR: The Human Filter
Extended Reality (XR) systems, including Virtual Reality (VR) and Augmented Reality (AR), are rapidly advancing, with growing capabilities to model a user’s behavior, appearance, and surroundings. XR systems can sense head position, posture, eye movement, voice, and even cognitive load, and can display virtual stimuli through standard sensory channels in ways that may be indistinguishable from real-world stimuli. While today’s XR systems are almost exclusively dedicated to the practice of what we would typically think of as XR, e.g., for training, education, or entertainment, they could do so much for humans beyond simply “doing XR.”
In this talk I will discuss leveraging the nexus of developments in XR systems, smartphones, and smart watches, and well-established principles and mechanisms from control theory, to develop a holistic, principled, and generalized means for the continuous optimal estimation of a range of intrinsic human characteristics. I will also discuss how, in a complementary manner, head-worn and other devices could be used to produce visual, aural, and tactile stimuli for individual users at any moment, in the context of whatever they are doing, to influence the user in helpful ways. I will motivate the ideas, discuss a possible theoretical framework, and some example application areas.
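The control-theoretic estimation machinery alluded to above is in the spirit of the Kalman filter, the subject of the speaker's widely cited 1995 tutorial. As a rough, hedged sketch (not the talk's actual framework), a minimal one-dimensional Kalman filter fusing noisy sensor readings into a continuously updated estimate might look like this; the "signal" here is a hypothetical slowly varying human characteristic, and all parameter values are illustrative assumptions:

```python
import random

def kalman_1d(measurements, q=1e-3, r=0.25):
    """Minimal 1D Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance (assumed)."""
    x, p = 0.0, 1.0              # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += q                   # predict: uncertainty grows over time
        k = p / (p + r)          # Kalman gain: trust in the new reading
        x += k * (z - x)         # update: pull estimate toward reading
        p *= (1.0 - k)           # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a (hypothetical) constant true level 0.7
random.seed(42)
readings = [0.7 + random.gauss(0.0, 0.5) for _ in range(300)]
estimates = kalman_1d(readings)
```

The same predict/update loop generalizes to multivariate states and multiple heterogeneous sensors, which is what makes it a natural backbone for the kind of continuous, holistic estimation the talk describes.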

Guodong Rong
Meta Reality Labs
Short Bio
Dr. Guodong Rong received his Ph.D. from the National University of Singapore. He is currently a software engineer at Meta Reality Labs, serving as the tech lead of the VR compositor. He has over 20 years of experience in graphics- and VR-related areas in both academia and industry. Before joining Meta, he worked at NVIDIA, Samsung, Google, Huawei, Baidu, and LG as a software engineer, and at the University of Texas at Dallas as a postdoctoral researcher. His research interests include computer graphics, VR/AR, computational geometry, and autonomous driving simulation.
Why is VR Graphics Hard?
VR graphics has many unique properties that pose great challenges to achieving a good user experience. This talk will explain some of these challenges, related both to VR system hardware and to human factors, so that the audience can learn why VR graphics is hard. Some optimization techniques will also be briefly covered to show how Meta addresses some of these challenges in its VR devices.

Luciana Nedel
UFRGS
Short Bio
Luciana Nedel is a full professor at the Institute of Informatics of UFRGS, where she has been teaching and doing research in virtual reality, interactive visualization, and human-computer interaction since 2002. She received her PhD in Computer Science from the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland, in 1998. In her research career, she has been involved in projects with industry and in cooperation with different universities abroad. Her main research interests include virtual and augmented reality, immersive visual analytics, and 3D user interfaces (3DUI). She is a member of IEEE, ACM, and SBC, where she has contributed as program committee chair many times: IEEE VR 2025 full papers, Interact 2025 short papers, IEEE VR 2022 journal papers, etc. She is also an associate editor for IEEE TVCG (Transactions on Visualization and Computer Graphics), Computers & Graphics, IEEE Computer Graphics & Applications, The Visual Computer (TVC), Frontiers in Virtual Reality, and the SBC JBCS (Journal of the Brazilian Computer Society).
Talk Title (TBD)
Coming Soon…

Marcio Filho
ACJOGOS-RJ
Short Bio
Márcio Filho is one of the leading institutional figures in Brazil’s electronic games sector, having served as the re-elected president of the Association of Game Creators of the State of Rio de Janeiro (ACJOGOS-RJ) for the 2024–2026 term. He participated actively in the formulation and negotiation of the Legal Framework for Games (Federal Law No. 14,852/2024), approved in 2024, establishing himself as one of the main political articulators of the sector’s regulation in the country. He also served as a reviewer for Brazil’s first public call for Geek Culture (Niterói, RJ) and as a Participatory Budgeting councilor for Culture between 2021 and 2023. His work also includes organizing national academic symposia on games, virtual reality, and computing in partnership with the Brazilian Computing Society.
With more than 16 years of experience in the development of games and gamified solutions, Márcio is the founder of GF Corp and the creator of the CASE platform, an international reference in game-based innovation for education and training. He is certified in Gamification by the Wharton Business School (UPenn) and is a specialist in virtual teaching from the University of Columbia-Irvine (UCI). Between 2008 and 2025, he developed more than 60 games for organizations such as SESI, SESC, and FURNAS, and registered more than a dozen intellectual properties with the INPI. His trajectory combines technical excellence, strategic thinking, and strong articulation between the creative sector and public policy.
Talk Title (TBD)
Coming Soon…

Ming Lin
University of Maryland at College Park
Short Bio
Ming C. Lin is currently a Distinguished University Professor and the Barry Mersky and Capital One E-Nnovate Endowed Professor of Computer Science at the University of Maryland at College Park. She is also an Amazon Scholar, former Elizabeth Stevinson Iribe Chair of Computer Science at UMD, and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC) at Chapel Hill. She received her B.S., M.S., and Ph.D. degrees in Electrical Engineering and Computer Science from the University of California, Berkeley. She is a Fellow of the National Academy of Inventors, ACM, IEEE, Eurographics, the ACM SIGGRAPH Academy, and the IEEE VR Academy.
Dynamics-Aware Learning: From Simulated Reality to the Physical World
In this talk, we present an overview of some of our recent work on the differentiable programming paradigm for learning, control, and inverse modeling. This ranges from dynamics-inspired, learning-based algorithms for detailed garment recovery from video and 3D human body reconstruction from single- and multi-view images, to differentiable physics for robotics, quantum computing, and VR applications. Our approaches adopt statistical, geometric, and physical priors and a combination of parameter estimation, shape recovery, physics-based simulation, neural network models, and differentiable physics, with applications to virtual try-on and robotics. We conclude by discussing possible future directions and open challenges.

Soraia Raupp Musse
PUC/RS
Short Bio
Soraia Raupp Musse is a Full Professor at the Polytechnic School of PUCRS (Pontifical Catholic University of Rio Grande do Sul, Brazil) and a CNPq Productivity Fellow. She holds degrees in Computer Science from PUCRS (BSc, 1990), UFRGS (MSc, 1994), and EPFL in Switzerland (MSc, 1997; Ph.D., 2000), with a postdoctoral fellowship at the University of Pennsylvania (2016). Her research focuses on graphics processing, including virtual humans, crowd simulation, visual perception, and computer vision. She has authored over 220 publications in leading journals and conferences such as Elsevier Computers & Graphics, IEEE TVCG, Computer Graphics Forum, SIGGRAPH, and MIG, and co-authored four internationally published books with Springer-Verlag, including the first book on Crowd Simulation. She has supervised more than 180 theses and served on over 140 academic committees. Her work has been recognized with 48 awards, including the Google Research Award (2022), the Santander Science and Innovation Award (2013), and the Finep Innovation Award (2003). She is currently Editor-in-Chief of the Journal of the Brazilian Computer Society (JBCS) and has chaired numerous conferences, including service on national research committees for CNPq and CAPES. In 2024, she was honored as the Featured Researcher at SBGames, South America’s premier conference on digital games, and will serve as a keynote speaker at SIBGRAPI, SVR, and SBGames 2025. She also coordinates the newly established INCT-SiM-AI, a Brazilian National Institute of Science and Technology focused on AI-driven personalized solutions for climate disaster response.
From Human Bias to Embodied AI: Shaping the Future of Virtual Humans
This talk explores the evolution of virtual humans, tracing their development from early computer graphics representations to today’s intelligent, embodied agents. We begin by revisiting the historical and conceptual foundations of virtual humans and their roles in simulations and entertainment, highlighting how these milestones have shaped the way we perceive and design VHs in digital environments. The discussion then focuses on two contemporary aspects of virtual humans: first, understanding human perception and bias toward VHs, and second, the rise of Embodied Conversational Agents (ECAs), with an emphasis on how advances in speech, emotion modeling, and non-verbal behavior have enhanced human–agent interaction. Building on these trends, we examine how integrating ECAs and virtual humans with Large Language Models (LLMs) is significantly enhancing agents’ ability to reason, contextualize, and engage in fluid, human-like dialogue. The talk concludes by reflecting on the future of embodied interaction, outlining the opportunities and challenges emerging at the intersection of computer graphics, cognitive modeling, and generative AI.